Last April, SCTE-ISBE hooked up with Comcast and Liberty Global to set up a contest aiming to address the ‘elephant in the room’: energy consumption in broadband, and especially in Hybrid Fibre-Coaxial (HFC) networks. The contest was set up to collect the brightest ideas and innovations across the cable industry to overcome the problem of how to manage and optimize energy consumption in broadband networks.
Getting involved was self-evident for Teleste as soon as the contest was announced. Indeed, we fully concur with the ‘godfathers’ of the Adaptive Power Challenge on the energy consumption issue and are truly excited to be engaged in this crusade.
There is undeniable evidence that energy consumption in broadband networks is significant. One of the most comprehensive studies on the topic was conducted by SCTE: www.scte.org
The study points out that between 73% and 83% of a cable network’s overall energy is consumed by hubs, headends, and the access network power supplies which power the active equipment in the HFC network. The split of a cable operator’s power consumption is depicted in the ‘Energy pyramid’:
The study further emphasizes that the savings in energy costs can be remarkable, as much as 20%. More importantly, from the end users’/consumers’ perspective, the savings can be achieved without compromising service quality.
Teleste has been investigating how performance optimization correlates with power consumption for quite some time. A major share of the energy consumption takes place in outside plant devices, specifically in their amplifier modules, which run at full throttle even when it is not necessary. It is like leaving a water tap running forever. Makes no sense, right?
Our proposal in the Adaptive Power Challenge is based on the finding that although outside plant devices are designed and operated to fulfil their specification at full 1.2 GHz load, the actual capacity used in the network is seldom that heavy. This means that the amplifier module doesn’t have to run at full power all the time. When maximal capacity is not in use, it can be operated with a lower bias current, which results in lower power consumption. Further, the end-user quality of experience stays intact.
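As a rough illustration, the idea of matching the amplifier’s operating point to the spectrum actually in use could be sketched as below. The mode names, thresholds and function are purely illustrative assumptions for this post, not Teleste’s actual implementation:

```python
# Hypothetical sketch: pick an amplifier bias mode from the highest
# occupied downstream frequency. Thresholds are illustrative only.

def select_bias_mode(highest_used_mhz: float) -> str:
    """Choose a power-save mode based on the highest occupied frequency."""
    if highest_used_mhz > 1000:
        return "full"      # genuine 1.2 GHz load: nominal bias current
    if highest_used_mhz > 860:
        return "reduced"   # partial load: lower bias current
    return "eco"           # light load: lowest bias current

# Today's plants rarely carry traffic above 1.0 GHz, so the reduced
# modes would apply most of the time.
print(select_bias_mode(996.0))
```

The point of the sketch is simply that the bias setting becomes a function of used capacity instead of a fixed worst-case value.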
Today, most, if not all, of the 1.2 GHz outside plant devices use a lot of unnecessary power, since there is practically no capacity in use above 1.0 GHz. We also believe that in the future the full capacity up to 1.2 GHz will not be totally exploited, which prolongs the power-saving opportunity.
There are several ways to implement this power-saving feature in CATV network equipment, all of which are based on adjusting the bias current in the amplifier module.
Our proposal consists of three novel methods to do that:
1. Remotely adjusted performance levels:
CATV network equipment (i.e., optical nodes and amplifier stations) can have two or more pre-defined power-save states, which can be remotely controlled via a simple extension of the current unidirectional communication method used to remotely command ingress switches in the amplifier station (referred to as the Return Ingress Switch (RIS) in the ‘LGI vocabulary’). Initially, RIS was developed to enable cost-efficient unidirectional remote management of ingress switches. However, the RIS software can be enhanced with minor modifications to also support remote management of the power-save mode in the amplifier module. RIS enables changing the power-save mode, e.g., within one hub area, with a click of a button at the back office. Power-save modes can be set according to the capacity used.
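To make the remote-control idea concrete, a unidirectional command carrying a power-save state could look something like the sketch below. The frame layout, opcode and state names are invented for illustration and are not the actual RIS protocol:

```python
# Hypothetical sketch of extending a RIS-style unidirectional command
# set with power-save states. Opcode 0x50 and the state table are
# assumptions, not the real RIS message format.

POWER_SAVE_STATES = {0: "full", 1: "save_1", 2: "save_2"}

def encode_power_save_command(hub_id: int, state: int) -> bytes:
    """Build a one-way command frame selecting a power-save state."""
    if state not in POWER_SAVE_STATES:
        raise ValueError("unknown power-save state")
    return bytes([0x50, hub_id & 0xFF, state])

def decode_power_save_command(frame: bytes) -> tuple[int, str]:
    """Parse the frame in the device; no reply path is needed."""
    opcode, hub_id, state = frame
    if opcode != 0x50:
        raise ValueError("not a power-save command")
    return hub_id, POWER_SAVE_STATES[state]

# One click at the back office broadcasts the same frame to a hub area.
frame = encode_power_save_command(hub_id=7, state=1)
print(decode_power_save_command(frame))
```

Because the channel is unidirectional, the devices simply apply whatever state the last valid frame carried, which keeps the extension as simple as the original ingress-switch commands.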
2. Autonomous performance adjustment:
CATV network equipment can have a built-in RF power measuring function that measures the total downstream RF power. The capacity in use in the network is determined based on these RF measurements, and the amplifier module’s power consumption is adjusted accordingly. This method is a ‘stand-alone’ automatic function, and hence no remote access is needed.
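An autonomous loop of this kind would typically include some hysteresis so the amplifier does not toggle between modes on every measurement. The sketch below is a minimal illustration; the dBmV thresholds and margin are assumptions, not measured plant values:

```python
# Hypothetical sketch of a stand-alone adjustment loop: measure total
# downstream RF power, infer the load, and step the bias mode with
# hysteresis. All thresholds are illustrative assumptions.

def adjust_bias(total_rf_dbmv: float, current_mode: str) -> str:
    HIGH, LOW, MARGIN = 60.0, 50.0, 2.0  # dBmV, illustrative only
    if total_rf_dbmv >= HIGH + MARGIN:
        return "full"
    if total_rf_dbmv <= LOW - MARGIN:
        return "eco"
    # Within the hysteresis band: keep the current mode to avoid toggling.
    return current_mode

mode = "eco"
for reading in [45.0, 55.0, 63.0, 59.0]:
    mode = adjust_bias(reading, mode)
print(mode)
```

Running entirely inside the device, a loop like this needs no back-office connection at all, which is what makes the method ‘stand-alone’.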
3. Load-based performance adjustment:
A Distributed Access Architecture (DAA) node is aware of the capacity used in the network because it generates the digital TV and DOCSIS data content by modulating it into an RF signal. This capacity information can be used to adjust the amplifier module’s power consumption to meet the requirements of the capacity used. This method is a ‘stand-alone’ automatic function, and hence no remote access is needed. In the case of an RF overlay deployment, an additional tuner module to measure the broadcast TV part should be considered. We have such a tuner module available off-the-shelf. It should be noted that this feature does not require a complex (2nd generation) Digital Pre-distortion (DPD) implementation.
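Since the DAA node itself builds the channel line-up, it can derive the occupied spectrum directly from its own configuration, with no RF measurement at all. A minimal sketch, assuming a simple channel-plan representation and illustrative thresholds (not any actual DAA node API):

```python
# Hypothetical sketch of load-based adjustment in a DAA node: the node
# knows which channels it modulates, so the highest occupied frequency
# follows directly from the channel plan. Data layout is an assumption.

def highest_occupied_mhz(channel_plan: list[tuple[float, float]]) -> float:
    """channel_plan: (centre_mhz, width_mhz) per generated channel."""
    return max(centre + width / 2 for centre, width in channel_plan)

def mode_for_plan(channel_plan: list[tuple[float, float]]) -> str:
    """Run the amplifier at full bias only when spectrum above 1 GHz is used."""
    return "full" if highest_occupied_mhz(channel_plan) > 1000 else "reduced"

# Example plan: two 8 MHz DVB channels plus a 192 MHz OFDM block on top.
plan = [(474.0, 8.0), (602.0, 8.0), (900.0, 192.0)]
print(highest_occupied_mhz(plan), mode_for_plan(plan))
```

In an RF overlay deployment the broadcast TV part is not generated by the node, which is why the text above mentions a tuner module to fill in that missing piece of the picture.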
The three methods described above can save up to 30% of the power consumed by the amplifier module. This can reduce the total network power consumption by up to 20%. That saves a lot of money, energy and nature, all without truck rolls. This does make sense, right?
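As a back-of-the-envelope check of how a 30% amplifier-module saving translates into roughly 20% at the network level, consider the arithmetic below. The amplifier share of total network power is an assumed figure chosen to be consistent with the study’s 73–83% range, not a number from the text:

```python
# Rough sanity check of the savings claim. The amplifier share of total
# network power (0.67) is an illustrative assumption.

amplifier_share = 0.67    # assumed fraction of total network power
amplifier_saving = 0.30   # up to 30% saved within the amplifier module

network_saving = amplifier_share * amplifier_saving
print(f"network-level saving: {network_saving:.1%}")
```

In other words, the 20% network-level figure follows as long as the amplifier modules account for roughly two thirds of the total consumption.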