Blackout: who should be in the dock?

    Questions remain over the events surrounding the major blackout on 9 August. Ian Bitterlin gives his views on where the blame lies… 

    The partial blackout in the UK on 9 August that cut off one million consumers and triggered chaos on the railway network is now getting lots of folks excited, and the government is petulantly organising an enquiry involving National Grid (the electricity system operator), all of the DNOs and, especially, Ørsted and RWE, the owner/operators of the two power plants that appeared to have triggered the event – Hornsea, a North Sea wind farm, and Little Barford, Bedfordshire, a 740MW CCGT (combined cycle gas turbine) plant, respectively.

    It was a windy day in the North Sea with a relatively low (and steady) national load demand of 28.2GW from 09:00 leading up to the first event – Little Barford tripping off load – at 16:52, when demand was 29.5GW. The second event – Hornsea unloading 737MW – occurred just after Little Barford. Ten minutes after these events the national average load had dropped by 100MW – very probably due to parts of the, by then stationary, electrified train network.

    To be pedantic, there were three reported individual events that the initial ESO report linked as one overall event, the first being a lightning strike. On the face of it, with the events being separated by over 100 miles, I thought that linking them as ‘one’ was ridiculous, but I concluded that it was more of a ‘system’ failure than two individual power station failures, so one event is the best way of looking at it. 

    The lightning strike, which some commentators have labelled a red herring, probably happened; it was a warm, humid 23°C August evening with 200mm of rain that day and numerous lightning events, as usual. Regardless, ‘something’ had to trigger Little Barford to trip, and it is more important to focus on why that mattered and why it caused Hornsea to come out in sympathy.

    Lightning is always a useful excuse that helps God to take some of the blame, but strokes happen all the time. Usually, they only cause a section of the inter-meshed grid to trip and a customer group to lose voltage for three seconds or so, until the auto-reclosure system kicks in – rather than causing a whole station to trip offline. More detail on the Little Barford trip would therefore be helpful.

    That day the grid was being fed with a very creditable fuel-mix of 46% renewables (31%, 8.7GW, of wind and 13%, 3.8GW, of solar-PV, although the PV was falling as the evening drew on), only 1.7% from coal, 22% from nuclear and 29% from natural gas.

    The sequence of contributory events is then a matter of some conjecture, supported by the preliminary ESO report of 20 August: Little Barford tripped off 244MW and reduced the online generation capacity by 0.8%. This would have produced a rapid rise in voltage and dip in frequency associated with a fast rate-of-change of that frequency (ROCOF) and, I think importantly, increased the percentage of renewables feeding the grid. 
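
    For a sense of scale, the textbook swing-equation approximation links a sudden loss of generation to the initial ROCOF. The short sketch below applies it to the figures quoted in this piece; the system inertia constant H is an assumed, illustrative value, not a measured figure for the GB grid that afternoon.

```python
# A minimal sketch (not from the article) of the textbook swing-equation
# estimate linking a sudden loss of generation to the initial rate of
# change of frequency (ROCOF): df/dt ~ -(dP/P) * f0 / (2 * H).
# The inertia constant H is an assumed, illustrative value, not a measured
# figure for the GB grid on 9 August.

F0 = 50.0   # nominal grid frequency, Hz
H = 4.0     # assumed average system inertia constant, seconds (illustrative)

def initial_rocof(lost_fraction: float) -> float:
    """Approximate initial df/dt (Hz/s) after losing a fraction of generation."""
    return -lost_fraction * F0 / (2.0 * H)

print(f"0.8% loss (Little Barford's first trip): {initial_rocof(0.008):+.3f} Hz/s")
print(f"6.0% combined loss (described below):    {initial_rocof(0.060):+.3f} Hz/s")
```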

    Sometime around here, another 500MW (1.8% of system load) of embedded generation tripped offline as the grid voltage and its frequency fluctuated. As the frequency dropped towards the safety limit of 48.8Hz, Hornsea – incapable of ‘increasing’ the wind strength – reduced its output by 737MW (2.5%), leaving just 62MW generating. Little Barford’s steam generator then shed another 244MW (0.8%) as its systems automatically reacted to the system frequency alarms.

    In a matter of seconds, rather than minutes, the combined loss of capacity was about 6% and the 94% capacity that was still connected did not have the overload capability to maintain the system frequency within safe limits, resulting in the ESO automatically instructing DNOs to cut off load and reduce demand. There is some evidence that DSR and emergency frequency support was also called for by the ESO but, by the time it tried to work, 6% of the load had been shed and the frequency bounced back to 50Hz, leaving the emergency systems with nothing to do. Some, allegedly, tried to take load rather than support it. 
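
    For the record, the ‘about 6%’ follows directly from the megawatt figures quoted above and the 29.5GW demand at 16:52; a quick tally:

```python
# Rough arithmetic behind the "about 6%" figure, using only the megawatt
# numbers quoted in the article and its 29.5GW demand figure for 16:52.

demand_mw = 29_500
losses_mw = {
    "Little Barford gas turbine trip": 244,
    "embedded generation tripping":    500,
    "Hornsea de-loading":              737,
    "Little Barford steam set":        244,
}

total_mw = sum(losses_mw.values())
print(f"Total lost capacity: {total_mw} MW "
      f"({100 * total_mw / demand_mw:.1f}% of the 29.5GW demand)")
# -> 1,725 MW, i.e. roughly the 6% quoted in the text
```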

    I haven’t used the term ‘spinning reserve’ yet but the problem, once 6% was lost, was that there was not enough safety margin in the system (of capacity versus demand) to cope with the dynamic behaviour. This was aggravated by the high proportion of wind and solar running at the time of the initial event – they are likely to be run at full capacity whenever the wind blows and the sky is bright; this is a very different form of ‘spinning reserve’ from the one we know from steam turbines, and it gives an inflexible response to load change.

    It was also certainly aggravated by the fact that the grid is currently undergoing an upgrade to its protection settings to enable the use of the high levels of intermittent renewables expected in the future – witnessed by an odd press release a few weeks ago announcing that the grid would be capable of ‘100% zero-carbon’ distribution by 2020, which means, I inferred at the time, that the grid is not ready yet. Maybe 46% variable/intermittent generation is the safe limit? There was a Danish paper (Denmark is very often >40% wind powered) a few years back that questioned whether >50% wind was ever possible while maintaining grid stability.

    When writing this piece on 30 August, I checked the UK grid at 12.30pm and found 48% renewables (32% wind, 16% solar), our usual 19% nuclear, 20% CCGT, 7% biomass, no coal and 3% being imported via the interconnects, mainly French nuclear; almost the same fuel mix as 9 August, with a heavy reliance on low inertia intermittent sources. 

    We should note two interesting points about the power stations involved: Hornsea is only at the start of its life, coming on stream in February this year with 28 turbines installed out of a planned 74, and it is on track to become one of the largest wind installations in the world at 6GW peak – so this problem can only get more severe.

    On the other hand, on 9 August, Little Barford needed a helping hand, such as a grid-scale battery, and, somewhat coincidentally, it nearly had one. A trial 12MWe polysulphide bromide flow battery had been installed, but it failed to demonstrate that it could be scaled up and the project was abandoned, since 12MWe is no good to anyone in a 740MWe plant.

    It is interesting to note that the 6% drop in capacity was matched by approximately 6% of consumer disconnections. I wonder how they chose the victims?
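
    For readers wondering how the ‘victims’ are typically chosen: grids generally rely on automatic low-frequency demand disconnection, in which the DNOs pre-assign blocks of feeders to frequency thresholds and relays shed each block as the falling frequency crosses it. The sketch below is a purely illustrative model of that staging; the thresholds and block sizes are assumptions for the example, not the actual GB settings.

```python
# An illustrative (not actual GB) model of low-frequency demand disconnection:
# DNOs pre-assign blocks of feeders to frequency thresholds, and relays shed
# each block automatically when the falling frequency crosses its threshold.
# The thresholds and block sizes here are invented for the example.

LFDD_STAGES = [
    (48.8, 0.05),   # at 48.8 Hz, shed ~5% of demand (first pre-assigned block)
    (48.6, 0.07),   # further stages would follow if the frequency kept falling
    (48.4, 0.10),
]

def demand_shed(frequency_hz: float) -> float:
    """Fraction of demand shed once frequency has fallen to the given value."""
    return sum(block for threshold, block in LFDD_STAGES if frequency_hz <= threshold)

print(f"Shed at 48.9 Hz: {demand_shed(48.9):.0%}")   # 0% - no stage reached
print(f"Shed at 48.8 Hz: {demand_shed(48.8):.0%}")   # 5% - first block only
```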

    But, when it comes down to the nitty-gritty, it is the government and Ofgem that should be the prime suspects and in the dock. Why? Well, the blackout was caused by the unusual event of two power stations tripping offline at nearly the same moment. The grid automatically shut off load to protect the system (and the consumer) from dangerous voltage swings and low frequency. The fact that the second station was an offshore wind farm when the wind was gusting strongly could have added to the stability problem.

    But why did it trip the grid? Because the system at that instant had (and always has?) insufficient spinning reserve, so that the private owner/operators can save costs and increase profits for their stakeholders. The government (past, but the present bunch of any political persuasion would probably do the same thing) sold off the electrical utility because it was inefficient and costly to run; no doubt also to bolster a flagging economy at the time.

    However, it was undervalued and sold off very cheaply compared with the investment made with tax-payers’ money over many previous decades. It was ‘inefficient’ due to ‘too many’ staff, ‘too much’ (ie the right amount of) preventive maintenance and ‘too much’ spinning reserve. It was too conservative… 

    Well, the maintenance investment has gone down and the spinning reserve cut to below the bone when the wind blows hard – and we all saw the result. It is a rare occurrence but is likely to occur a little more frequently as we increase the proportion of intermittent renewables.

    If we want a utility that doesn’t fail, then the management of that utility should be based on engineering principles, not cost reduction. To get the highest possible contribution from intermittent renewable sources and move to a zero-carbon grid, we need either massive storage facilities or >40% nuclear generation to replace the gas (which has largely displaced coal already), or some combination of the two.

    There were knock-on events that deserve comment: Ipswich Hospital reported that one out of 11 generators failed to support it, but let us immediately remind ourselves that hospitals are not dependent on generators (or UPS) in life-safety areas and that, in this case, only out-patients, X-ray and pathology were affected. A later statement blamed one switchboard auxiliary battery.

    However, generator system testing (in hospitals, just as in data centres) is an essential feature of the planned maintenance routines.

    Unlike data centres, for hospitals the testing is mandated by regulations according to health technical memorandum HTM 06-01, where it says, in section 17.64, ‘…include tests on the protection relays, battery units (on load), auxiliary relays, timer relays, coils, terminations and linkages forming the open/close mechanism…’ 

    The hospital claimed to have followed this lengthy and detailed HTM but does not appear to have sufficiently tested the changeover switchgear battery/charger ‘on-load’, since, according to its statement, it relied upon the ‘recommended life’ of the batteries being OK.

    In data centres it is highly recommended to test generators on-load, including the switchgear changeover, every month, since utility ‘failure’ is regarded as a ‘normal event’, and we know from bitter experience that batteries are a plant item whose service life is only a proportion of their design life – usually 80% at best, but much lower if neglected. So blaming the utility for the failure of the hospital plant is not a reasonable argument; more likely, a cut in maintenance and testing is a direct consequence of government cuts in NHS funding. Again, is the government to blame?
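
    To make that concrete, here is a hypothetical sketch of the kind of planned-maintenance check implied above – flagging batteries past roughly 80% of their design life and generators that have not had an on-load (changeover included) test within the last month. The data structure and thresholds are illustrative, not taken from HTM 06-01 or any vendor schedule.

```python
# A hypothetical sketch (not from HTM 06-01 or any vendor schedule) of the
# planned-maintenance checks implied above: flag batteries past ~80% of their
# design life and generators without an on-load test in the last month.
from datetime import date, timedelta

BATTERY_DERATING = 0.8               # assume service life ~80% of design life
TEST_INTERVAL = timedelta(days=31)   # monthly on-load test, changeover included

def battery_replacement_due(installed: date, design_life_years: float, today: date) -> bool:
    service_days = design_life_years * 365 * BATTERY_DERATING
    return today >= installed + timedelta(days=service_days)

def generator_test_overdue(last_on_load_test: date, today: date) -> bool:
    return today - last_on_load_test > TEST_INTERVAL

today = date(2019, 8, 9)
print(battery_replacement_due(date(2014, 6, 1), 5.0, today))   # True: past 80% of 5 years
print(generator_test_overdue(date(2019, 6, 1), today))         # True: >1 month since test
```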

    Similarly, the electrified train network needs to review its own power strategy rather than rely entirely on the grid. It is a remarkably bad engineering design that results in 60 trains (Govia Thameslink Railway in the South East) each needing an engineer to be sent out to ‘reset’ it manually after a power cut.

    Network Rail needs to understand that electric trains are much less autonomous than diesel-electric ones and that their control systems need backup power, just like any other critical infrastructure. There were 371 services cancelled and 220 partially cancelled after a 15-minute blackout. Maybe it is the same problem of privatisation reducing quality in order to cut costs and increase profits?

    DSR (by its very nature off-line) was clearly of no use in this type of unplanned and ‘instant’ event. If the grid trips, DSR cannot connect to it. In fact, if the grid is not reasonably stable, then DSR in the form of standby generators will trip off-line to protect itself.

    If I were Ofgem, I would be asking one question at a time, and the first would be ‘where was the output from Dinorwig?’ – our 1.8GW pumped storage system (coincidentally, about the same as the roughly 6% of capacity lost on 9 August), which was designed to cover exactly this kind of demand peak or supply trough.

    Now a governmental review? What a waste of tax-payers’ money: getting experts to explain to a bunch of amateurs that Ofgem and National Grid should control (command) the DNOs to ensure that enough on-line (not DSR) dynamic reserve capacity is in place, even if it costs the DNOs money.

    It is the privatisation of the utility (and the split of responsibility into parts, each of which is driven by profit, with clearly insufficient system responsibility) that lies at the heart of this blackout, and the others to come. Renationalisation is probably out of the question, but we certainly need a safe system – and, in the narrow sense that the grid protected itself by shedding load, we have one.

    Maybe the price of electrical safety in a low-carbon future is a utility system that trips off-line from time to time… but that can produce other safety issues if backup is missing.
