IBM BladeCenter
From Wikipedia, the free encyclopedia
The IBM BladeCenter was IBM's blade server architecture, until it was replaced by Flex System. The x86 division was later sold to Lenovo in 2014.
BladeCenter E front side: 8 blade servers (HS20) followed by 6 empty slots
BladeCenter E back side, showing two blowers and two power supply units on the left, and a management module with console cables on the right.
The MareNostrum supercomputer has 86 BladeCenters (6 BladeCenter E on each computing rack)
Introduced in 2002, based on engineering work started in 1999, the IBM BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 server processors and input/output (I/O) options. In February 2006, IBM introduced the BladeCenter H with switch capabilities for 10 Gigabit Ethernet and InfiniBand.
A web site called Blade.org was available for the blade computing community through about 2009.
In 2012 the replacement Flex System was introduced.
The original IBM BladeCenter was later marketed as BladeCenter E, with 14 blade slots in 7U. Power supplies were upgraded through the life of the chassis from the original 1,200 W units to later 2,000 and 2,320 W units.
The BladeCenter (E) was co-developed by IBM and Intel, and included:
14 blade slots in 7U
Shared media tray with optical drive, diskette drive and USB port
One management module (upgradable to two)
Two (upgradable to four) power supplies
Two redundant high-speed blowers
Two slots for Gigabit Ethernet switches (can also have optical or copper pass-through)
Two slots for optional switch or pass-through modules; can add Ethernet, Fibre Channel, InfiniBand or Myrinet 2000 functions
BladeCenter T is the telecommunications version of the original IBM BladeCenter, available with either AC or DC (48 V) power. It has 8 blade slots in 8U, but uses the same switches and blades as the regular BladeCenter E. To remain NEBS/ETSI compliant, special Network Equipment-Building System (NEBS) compliant blades are available.
BladeCenter H is an upgraded BladeCenter design with high-speed fabric options. It is backwards compatible with older BladeCenter switches and blades, and includes:
14 blade slots in 9U
Shared media tray with optical drive and USB port
One management module (upgradable to two)
Two (upgradable to four) power supplies
Two redundant High-speed blowers
Two slots for Gigabit Ethernet switches (can also have optical or copper pass-through)
Two slots for optional switch or pass-through modules, can have additional Ethernet, Fibre Channel, InfiniBand or Myrinet 2000 functions.
Four slots for optional high-speed switches or pass-through modules, can have 10 Gbit Ethernet or InfiniBand 4X.
Optional Hard-wired serial port capability
BladeCenter HT is the telecommunications company version of the IBM BladeCenter H, available with either AC or DC (48 V) power. It has 12 blade slots in 12U, but uses the same switches and blades as the regular BladeCenter H. To remain NEBS/ETSI compliant, special NEBS-compliant blades are available.
BladeCenter S targets mid-sized customers by offering storage inside the BladeCenter chassis, so no separate external storage needs to be purchased. It can also use 110 V power in the North American market, so it can be used outside the datacenter; when running on 110 V power, the total chassis capacity is reduced.
6 blade slots in 7U
Shared media tray with optical drive and two USB ports
Up to 12 hot-swap 3.5" (or 24 2.5") SAS/SATA drives with RAID 0, 1 and 1E capability; RAID 5 and SAN capabilities optional with two SAS RAID controllers
Two optional Disk Storage Modules for HDDs, six 3.5-inch SAS/SATA drives each.
4 hot-swap I/O switch module bays
1 Advanced Management Module as standard (no option for secondary Management Module)
Two 950/1450-watt, hot-swap power modules and ability to have two optional 950/1450-watt power modules, offering redundancy and power for robust configurations.
Four hot-swap redundant blowers, plus one fan in each power supply.
Modules based on x86 processors from Intel.
HS12 (2008) Features:
Single-core Celeron 445 to quad-core Intel Xeon processors up to 2.83 GHz
DIMM slots (up to 24 GB)
Hot-swap drives
HS20 (2002-6) Features:
Inside of IBM HS20 blade. Two 2.5 inch disk drive bays are unoccupied.
One or two Intel Xeon DP processors (single or dual-core)
Option for one or two 2.5" drives (ATA or SCSI/SAS, depending on generation)
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
HS21 (2007-8): This blade model can use the high-speed I/O option of the BladeCenter H, but is backwards compatible with the regular BladeCenter.
One or two Intel Xeon DP processors (dual or quad-core)
Option for one or two SAS 2.5" drives
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre-Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One High-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
HS21 XM (2007-8): This blade model can use the high-speed I/O option of the BladeCenter H, but is backwards compatible with the regular BladeCenter.
One or two Intel Xeon DP processors (dual or quad-core)
Eight DIMM slots
Option for one SAS 2.5" drive, or one or two flash-based solid-state drives
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X, as well as additional Fibre Channel or Ethernet ports)
HS22 (2009–11) Features:
One or two Intel Xeon 5500 or 5600 series processors (up to 3.6 GHz 4-core or 3.46 GHz 6-core)
DIMM slots (up to 192 GB of total memory capacity)
Option for two hot-swap SAS 2.5" drives or solid-state drives (RAID 0 and 1 are possible)
Two Gigabit Ethernet ports (Broadcom 5709S)
One CIOv slot (standard PCI Express daughter card) and one CFFh slot (high-speed PCI Express daughter card), for a total of 8 ports of I/O to each blade, including 4 ports of high-speed I/O
Requires Advanced Management Module
HS22V (2010–11): Features are very similar to the HS22, but:
18 DDR3 VLP DIMM slots, up to 288 GB of total memory capacity
Up to two 1.8" disks (SSD, not hot-swappable)
Requires Advanced Management Module
HS23 (2012) Features:
Single-wide
One or two Intel Xeon E5-2600 processors
16 DIMM slots, up to 1600 MHz
2 hot-swappable HDDs (SATA/SAS) or SSDs
Dual 10G/1G Ethernet onboard, expandable to 4x 10G
Virtual Fabric / vNICs onboard
Requires Advanced Management Module
HS23E (2012) Features:
Single-wide
One or two Intel Xeon E5-2400 processors
12 DIMM slots, up to 1600 MHz
2 hot-swappable HDDs (SATA/SAS) or SSDs
Dual Gigabit Ethernet onboard ports with TOE
Requires Advanced Management Module
HS40 (2004) Features:
Double-wide (needs 2 slots)
One to four Intel Xeon MP processors
Option for one or two ATA100 2.5" drives
Four Gigabit Ethernet ports
Two expansion slots for up to four additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
HC10 (2008): This blade model is targeted at the workstation market.
Single Intel Core 2 Duo processor
DIMM slots (8 GB max)
Video adapter
One SATA 60 GB HDD
Two Gigabit Ethernet ports
HX5 (2010–11): This blade model is targeted at the server virtualization market.
2 (expandable to 4) Intel Xeon E7, 6500 or 7500 series processors (4-10 cores per CPU, up to 2.67 GHz)
DIMM slots (256 GB max), expandable to 40 slots with a 24-DIMM MAX 5 memory blade (640 GB total)
Two Gigabit Ethernet ports per blade
Modules based on Opteron processors from AMD.
LS20 (2005-6) Features:
One or two AMD Opteron processors (single or dual-core)
Slots for DDR1 VLP DIMMs
Option for one or two SCSI U320 2.5" drives
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
Inside of IBM LS21 blade. Small circuit board visible on the bottom right is an optional Fibre Channel daughter card.
LS21: This blade model can use the high-speed I/O of the BladeCenter H, but is also backwards compatible with the regular BladeCenter.
One or two AMD Opteron processors (dual-core); supports 65 nm quad-core processors after a BIOS update (tested)
Slots for up to 32 GB of RAM
Option for one SAS or SATA 2.5" drive
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
LS22 (2008): Upgraded model of the LS21.
One or two 45 nm AMD Opteron processors (quad- or six-core)
Slots for up to 64 GB of RAM
Option for two SAS or SATA 2.5" drives
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
LS41 (2006-7): This blade model can use the high-speed I/O option of the BladeCenter H, but is backwards compatible with the regular BladeCenter.
Double-wide (needs 2 slots)
One to four AMD Opteron processors (dual-core)
Slots for up to 64 GB of RAM
Option for one or two SAS 2.5" drives
Four Gigabit Ethernet ports
Two expansion slots for up to four additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
LS42 (2008-9): Upgraded model of the LS41.
Double-wide (needs 2 slots)
One to four AMD Opteron processors (quad- or six-core)
Slots for up to 128 GB of RAM
Option for one or two SAS 2.5" drives
Four Gigabit Ethernet ports
Two expansion slots for up to four additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
Modules based on POWER processors from IBM.
JS20 (2006) Features:
Two PowerPC 970 processors at 1.6 or 2.2 GHz
DIMM slots for PC2700 ECC memory (max 8 GB)
Option for one or two ATA100 2.5" drives
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
JS21 (2006): This blade model can have the high-speed I/O option of the BladeCenter H, but is backwards compatible with the regular BladeCenter.
Supports virtualization, offering logical partitioning capabilities.
Two PowerPC 970MP single-core processors at 2.7 GHz or two dual-core processors at 2.5 GHz
DIMM slots for PC2-3200 or PC2-4200 ECC memory (max 16 GB)
Option for one or two SAS 2.5" drives
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
JS22 (2009) Features:
Supports virtualization, offering logical partitioning capabilities.
Two POWER6 dual-core processors at 4.0 GHz
DIMM slots for ECC Chipkill memory (max 32 GB)
One SAS drive up to 146 GB
Integrated Virtualization Manager (IVM)
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
JS23 (2009) Features:
Supports virtualization, offering logical partitioning capabilities.
Two POWER6 dual-core processors at 4.2 GHz
DIMM slots for ECC Chipkill memory (max 64 GB, 32 GB per processor)
One SAS drive up to 300 GB or one 69 GB solid-state drive
Integrated Virtualization Manager (IVM)
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
One CIOv PCIe expansion slot
JS43 Features:
Supports virtualization, offering logical partitioning capabilities.
POWER6 dual-core processors at 4.2 GHz
DIMM slots for ECC Chipkill memory (max 128 GB, 32 GB per processor)
Zero to two 2.5" SAS drives up to 300 GB, or zero to two 69 GB solid-state drives
Integrated Virtualization Manager (IVM)
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
Two PCIe CIOv expansion slots
Double-wide form factor
JS12 Features:
Supports virtualization, offering logical partitioning capabilities.
One POWER6 dual-core processor at 3.8 GHz
DIMM slots for ECC Chipkill memory (max 64 GB)
Zero to two 2.5" SAS drives up to 146 GB
Integrated Virtualization Manager (IVM)
Two Gigabit Ethernet ports, with optional dual-Gigabit card
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (4 Gbit/s Fibre Channel, 10 Gbit Ethernet, or InfiniBand 4X)
PS700 Features:
Supports virtualization, offering logical partitioning capabilities.
One POWER7 quad-core processor at 3.0 GHz
Two Gigabit Ethernet ports, with optional dual-Gigabit card
PS701: Features are very similar to the PS700, but:
One eight-core POWER7 processor
Max 128 GB of memory
PS702: Think of two PS701s tied together back-to-back, forming a double-wide blade.
PS703: Features are very similar to the PS701, but:
Two eight-core POWER7 processors at 2.4 GHz
PS704: Think of two PS703s tied together back-to-back, forming a double-wide blade.
Modules based on Cell processors from IBM.
QS20 Features:
Double-wide (needs 2 slots)
Two Cell BE processors at 3.2 GHz
1 GB XDR DRAM memory (512 MB per processor)
40 GB IDE100 2.5" drive
Two Gigabit Ethernet ports
InfiniBand 4X connectivity
QS21 Features:
Single-wide
Two Cell BE processors at 3.2 GHz
2 GB memory (1 GB per processor)
Two Gigabit Ethernet ports
One expansion slot for up to two additional ports (Fibre Channel storage, additional Ethernet, Myrinet 2000 or InfiniBand)
One high-speed expansion slot for up to two additional ports (10 Gbit Ethernet or InfiniBand 4X)
QS22 Features:
Single-wide
Two PowerXCell 8i processors at 3.2 GHz
Up to 32 GB of DDR2 RAM
Two Gigabit Ethernet ports
Expansion slots for a standard daughter card, an InfiniBand 4X DDR daughter card and an 8 GB uFDM flash drive
Themis Computer announced a blade around 2008. It ran the Sun UltraSPARC T2 processor from Sun Microsystems. Each module had one UltraSPARC T2 with 64 threads at 1.2 GHz and up to 32 GB of processor memory.
PN41 (developed in conjunction with CloudShield) Features:
Single-wide
Based on the Intel IXP2805 network processor
Full payload screening with deep packet inspection (DPI)
Full Layer 7 processing and control
User programmability using the packetC and RAVE open development languages
Quad 1 Gbit + quad 10 Gbit Ethernet controllers
Up to 20 Gbit/s DPI throughput per blade
Selective traffic capture, rewrite and redirect
LAN and WAN interfaces
A schematic description of the TriBlade module
The Roadrunner supercomputer used a custom module called the TriBlade from 2008 through 2013. An expansion blade connects two QS22 modules, with 8 GB RAM each, via four PCI Express x8 links to an LS21 module with 16 GB RAM, two links for each QS22. It also provides outside connectivity via an InfiniBand 4X DDR adapter. This makes a total width of four slots for a single TriBlade. Three TriBlades fit into one BladeCenter H chassis.
The BladeCenter can have a total of four switch modules, but two of the switch-module bays can take only an Ethernet switch or Ethernet pass-through. To use the other switch-module bays, a daughter card needs to be installed on each blade that needs it, to provide the required Fibre Channel, Ethernet, InfiniBand or other function. Mixing different types of daughter cards in the same BladeCenter chassis is not allowed.
Ethernet switch modules were produced by IBM, Cisco and Nortel. BLADE Network Technologies also produced switches, and was later purchased by IBM. In all cases the speed internal to the BladeCenter, between the blades, is non-blocking. External Gigabit Ethernet ports vary from four to six and can be either copper or optical.
A variety of SAN switch modules have been produced by Brocade, QLogic, McDATA (acquired by Brocade) and Cisco, with speeds of 1, 2, 4 and 8 Gbit Fibre Channel. Speed from the Fibre Channel switch to the blade is determined by the lowest common denominator between the blade's HBA daughter card and the switch. External port counts vary from two to six, depending on the switch module.
An InfiniBand switch module was produced by Cisco. Speed from the blade's InfiniBand daughter card to the switch is limited to IB 1X (2.5 Gbit). Externally the switch has one IB 4X and one IB 12X port. The IB 12X port can be split into three IB 4X ports, giving a total of four IB 4X ports and a total theoretical external bandwidth of 40 Gbit.
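The 40 Gbit figure above follows directly from the InfiniBand lane rates; a minimal sketch of the arithmetic (IB 1X at SDR signals 2.5 Gbit/s per lane, a 4X port bundles four lanes):

```python
# External-bandwidth arithmetic for the InfiniBand switch module above.
lane_gbit = 2.5                      # IB 1X SDR data rate per lane
ib4x_gbit = 4 * lane_gbit            # one IB 4X port = 10 Gbit/s
external_4x_ports = 1 + 3            # one native 4X port + the 12X port split in three
total_gbit = external_4x_ports * ib4x_gbit
print(total_gbit)  # → 40.0, matching the "40 Gbit" figure in the text
```

The same lane arithmetic explains the 80 Gbit figure quoted later for the high-speed 4X SDR module, which exposes eight external 4X-equivalent ports.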
Two kinds of pass-through module are available: copper pass-through and fibre pass-through. The copper pass-through can be used only with Ethernet, while the fibre pass-through can be used for Ethernet or Fibre Channel.
Bridge modules are only compatible with BladeCenter H and BladeCenter HT. They function like Ethernet or Fibre Channel switches and bridge the traffic onto the high-speed fabric. The advantage is that, from the operating system on the blade, everything seems normal (regular Ethernet or Fibre Channel connectivity), but inside the BladeCenter everything is routed over the high-speed fabric.
High-speed switch modules are compatible only with the BladeCenter H and BladeCenter HT. A blade that needs the function must have a high-speed daughtercard installed. Different high-speed daughtercards cannot be mixed in the same BladeCenter chassis.
A 10 Gigabit Ethernet switch module was available from BLADE Network Technologies. This allowed 10 Gbit/s connection to each blade, and to outside the BladeCenter.
There are several InfiniBand options:
A high-speed InfiniBand 4X SDR switch module from Cisco. This allows IB 4X connectivity to each blade. Externally the switch has two IB 4X ports and two IB 12X ports. The 12X ports can be split into three 4X ports each, providing a total of eight IB 4X ports, or a theoretical bandwidth of 80 Gbit. Internally, between the blades, the switch is non-blocking.
A high-speed InfiniBand pass-through module to directly connect the blades to an external InfiniBand switch. This pass-through module is compatible with both SDR and DDR speeds.
A high-speed InfiniBand 4X QDR switch module from Voltaire (later acquired by Mellanox). This allows full IB 4X QDR connectivity to each blade. Externally the switch has 16 QSFP ports, all 4X QDR capable.
The Roadrunner supercomputer was implemented with IBM BladeCenter components from 2008 through 2013.
Other supercomputers also employ the IBM BladeCenter, including two of the most powerful supercomputers of their country and six smaller supercomputers. A supercomputing center employs the IBM BladeCenter, and it has been used to render feature films.
Blade server
Supermicro SBI-7228R-T2X blade server, containing two dual-CPU server nodes
A blade server is a stripped-down server computer optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space, minimize power consumption and other considerations, while still having all the functional components to be considered a computer.
Unlike a rack-mount server, a blade server needs a blade enclosure, which can hold multiple blade servers and provides services such as power, cooling, networking, various interconnects and management. Together, the blades and the blade enclosure form a blade system. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.
In a standard server-rack configuration, one rack unit or 1U—19 inches (480 mm) wide and 1.75 inches (44 mm) tall—defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. As of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems.
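The density figures above can be checked with a short sketch (the 42U, 180 and 1440 numbers come from the text):

```python
# Density arithmetic: discrete 1U servers in a 42U rack versus the
# quoted 2014 blade figures.
rack_units = 42
discrete_1u_servers = rack_units              # one device per rack unit
servers_per_blade_system = 180
servers_per_rack = 1440
blade_systems_per_rack = servers_per_rack // servers_per_blade_system
density_ratio = servers_per_rack / discrete_1u_servers
print(blade_systems_per_rack)   # 8 blade systems make up the quoted rack figure
print(round(density_ratio, 1))  # roughly 34x the discrete-1U density
```

So the quoted 1440-servers-per-rack figure corresponds to eight fully populated blade systems, about 34 times what the same rack holds as discrete 1U servers.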
The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them among the blade computers, overall utilization becomes higher. Which services are provided varies by vendor.
HP BladeSystem c7000 enclosure (populated with 16 blades), with two 3U UPS units below
Computers operate over a range of DC voltages, but utilities deliver power as alternating current, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers may have redundant power supplies, again adding to the bulk and heat output of the design.
The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures. This setup reduces the number of PSUs required to provide a resilient power supply.
The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS).
During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.
A frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosures feature variable-speed fans and control logic, or even liquid cooling systems, that adjust to meet the system's cooling requirements.
At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling with racks populated at over 50% full. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers. This is because one can fit up to 128 blade servers in the same rack that will only hold 42 1U rack mount servers.
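The cooling argument above is easy to make concrete. The server counts (128 blades versus 42 1U servers per rack) come from the text; the per-server wattages below are hypothetical, chosen only to illustrate that total heat can rise even when each blade draws less than a 1U server:

```python
# Illustrative heat-load comparison for a fully populated rack.
blades_per_rack = 128        # from the text
one_u_per_rack = 42          # 1U servers in the same 42U rack
watts_per_blade = 250        # assumed value, illustrative only
watts_per_1u = 400           # assumed value, illustrative only
blade_rack_watts = blades_per_rack * watts_per_blade
one_u_rack_watts = one_u_per_rack * watts_per_1u
print(blade_rack_watts, one_u_rack_watts)
# → 32000 16800: under these assumptions the blade rack needs roughly
#   twice the cooling capacity despite each blade drawing less power
```

The exact ratio depends entirely on real per-server draw, but the direction of the effect holds whenever the per-blade saving is smaller than the threefold increase in server count.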
Blade servers generally include integrated or optional network interface controllers for Ethernet, host adapters for Fibre Channel storage systems, or converged network adapters to combine storage and data via one Fibre Channel over Ethernet interface. In many blades at least one interface is embedded on the motherboard, and extra interfaces can be added using mezzanine cards.
A blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades.
While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, SCSI, SAS, Fibre Channel and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.
The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade.
Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the enclosure.
Systems administrators can use storage blades where a requirement exists for additional local storage.
A supercomputer cabinet with 48 blades, each containing 4 nodes with 2 CPUs each
Blade servers function well for specific purposes such as web hosting, virtualization and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade-server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor.
Eventual standardization of the technology might result in more choices for consumers; as of 2009, increasing numbers of third-party software vendors had started to enter this growing field.
Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling and power-supply technology. Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, these can suffer even more acutely from the heating, ventilation and air-conditioning problems that affect large conventional server farms.
Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process-control industry as an alternative to minicomputer-based control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive.
The VMEbus architecture (ca. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards to provide I/O, memory, or additional computing.
In the 1990s, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for the then-emerging Peripheral Component Interconnect (PCI) bus, called CompactPCI. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one master board in charge, coordinating the operation of the entire system.
PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in September 2001. This provided the first open architecture for a multi-server chassis. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability and dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, vendors promote them for telecommunications customers.
The first commercialized blade-server architecture was invented by Christopher Hipp and David Kirkeby, and their patent was assigned to Houston-based RLX Technologies. RLX, which consisted primarily of former Compaq Computer Corporation employees, including Hipp and Kirkeby, shipped its first commercial blade server in 2001. RLX was acquired by Hewlett-Packard in 2005.
The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management and networking, due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per-server-box basis.
In 2011, research firm IDC identified the major players in the blade market as HP, IBM, Cisco and Dell. Other companies selling blade servers include Supermicro and Cirrascale, among others.
Cisco UCS blade servers in a chassis
Though independent professional computer manufacturers such as Supermicro offer blade servers, the market is dominated by large public companies such as HP, which had a 40% share by revenue in the first quarter of 2014. The remaining prominent brands in the blade server market are Dell, Cisco and IBM, though the latter sold its x86 server business to Lenovo in 2014.
In 2009, Cisco announced blades in its Unified Computing System (UCS) product line, consisting of a 6U-high chassis holding up to 8 blade servers. It has a heavily modified Nexus 5K switch, rebranded as a fabric interconnect, and management software for the whole system. HP's line consists of two chassis models: the c3000, which holds up to 8 half-height ProLiant line blades (also available in tower form), and the c7000 (10U), which holds up to 16 half-height ProLiant blades. Dell's product, the M1000e, is a 10U modular enclosure that holds up to 16 half-height PowerEdge blade servers or 32 quarter-height blades.