Evaluating the benefits of alternative leak detection programs

New technologies have the potential to reduce the cost of leak detection and repair (LDAR) for producers of all sizes through smart LDAR program design, the right combination of technologies, and collaboration between producers within the same geographic area. This potential was examined in an extensive study by conducting multiple simulations using the Arolytics AROfemp model to evaluate the impact of alternative technologies on the cost and effectiveness of LDAR. In this study, AROfemp simulated 380 different alternative LDAR programs, each with 1500 Monte Carlo simulations to incorporate the random nature of methane leaks. Each simulation incorporated asset information of real producers in Alberta, different combinations of methane detection technologies (truck, airplane, and drone), various survey timings, and different thresholds for triggering follow-up surveys with a gas imaging camera for leak localization before repair. Our results showed that alternative monitoring programs can reduce the cost of finding methane leaks compared to traditional LDAR programs. This holds both for companies acting on their own and for those collaborating to conduct alternative LDAR programs together. Cost reductions for alternative LDAR programs can, in some cases, exceed 50%. However, results were strongly impacted by the choice of technology, facility type, and program design and logistics. For multi-producer collaborations, the logistics of follow-up surveys are important since alternative technology surveys can be much faster than traditional ground-based camera surveys. To avoid delays in leak localization and subsequent leak repairs, enough ground crews must be available and deployed in a timely manner.
Alternative LDAR has the potential to reduce costs and/or achieve deeper methane emission reductions for all producers, but it is not a one-size-fits-all solution, and programs that are successful for one producer cannot necessarily be replicated for others. Collaboration between small producers has the potential to address these barriers.

This paper is a non-peer reviewed preprint submitted to EarthArXiv.


INTRODUCTION
The model was used to simulate Alt-FEMP scenarios in two geographic regions of Alberta (Table 3).

In addition, Arolytics modelled four different Alt-FEMP types individually (Truck 2x, Drone 2x, Aerial 2x, Aerial 1x_Truck 1x) for each theoretical producer, as well as for the two multi-producer regions in Medicine Hat and Slave Lake. The four different Alt-FEMPs involved various combinations and survey frequencies of aerial, truck, and drone methodologies (Table 3). Nine follow-up combinations, as seen in Table 3, were simulated in order to obtain a range of follow-up behaviour, giving 38 programs per producer (four Alt-FEMP types × nine follow-up combinations, plus the baseline program and the default program). We took the same approach for the multi-producer regions; we modelled 38 programs for Slave Lake and 38 programs for Medicine Hat. In total, this resulted in model results for 418 different FEMPs.
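The program arithmetic above can be sketched as follows. Note that the split of entities into nine theoretical producers plus the two multi-producer regions is inferred from the reported totals (38 programs each, 418 overall), not stated directly in the text.

```python
# Tally of modelled FEMPs, reconstructed from the counts described above.
# The 11-entity split (9 theoretical producers + 2 regions) is an inference
# from the reported totals, not a figure given in the text.
ALT_FEMP_TYPES = 4      # Truck 2x, Drone 2x, Aerial 2x, Aerial 1x_Truck 1x
FOLLOW_UP_COMBOS = 9    # follow-up combinations from Table 3
EXTRA_PROGRAMS = 2      # baseline (no LDAR) + regulatory default

programs_per_entity = ALT_FEMP_TYPES * FOLLOW_UP_COMBOS + EXTRA_PROGRAMS
total_programs = programs_per_entity * (9 + 2)
print(programs_per_entity, total_programs)  # 38 418
```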

It was important to model many different combinations of follow-up percentages because the follow-up parameter has a significant impact on emissions. Only leaks that are followed up for localization can be repaired. Please note that this project did not test the effectiveness of various Alt-FEMPs; rather, it tested the performance of programs under the chosen assumptions.

Numerous additional options for Alt-FEMPs exist that were not modelled in this study, including different technology categories and work practices.

Cost assumptions used in this modelling are estimates only, and do not reflect the costs of any one company or service provider. In order to cover a range of possible costs for each methodology, we modelled both a low and a high cost scenario for each program and region.

The low and high costs were defined using both public information and direct discussions with service providers (Table 4). It is probable that service providers who offer CH4 detection services will change the prices of their services to respond to market fluctuations.
Therefore, we expect that these costs will vary from real-life scenarios and implementations of the programs modelled in this study. Multiple service providers offer OGI and alternative detection technologies at various prices, and this modelling was used as an exercise to test the impacts of different pricing assumptions.

Figure 1 shows that, overall, modelled alternative programs resulted in, on average, a ~20% greater reduction in CH4 emissions (expressed in CO2 equivalent with a GWP of 34) compared to default programs.

• Programs successful for one producer cannot necessarily be replicated for others, and alternative programs may need to be tailored to a producer's or region's operations and facilities.

• If alternative LDAR programs prove successful, smaller producers with limited resources may be unable to undertake these programs. Collaboration between small producers could reduce barriers, but policies and support may be needed to level the playing field and provide equal access to smaller producers.

• Model estimates of total CH4 emissions are not directly sensitive to changes in the input survey times.

Improvement in emission reductions is possible when survey time is reduced, as screening and follow-up campaigns are completed faster and repairs therefore occur sooner.

• Model estimates of total program costs are linearly and equally related (1:1) to changes in the input survey time.

S1 SENSITIVITY TESTS

Below is the analysis of the model's sensitivity to the survey time input parameter. Three trials for two different regions were modelled.

Each trial included a default and a Truck 2x program:
1. The default survey time value (Default and Truck 2x programs).
2. The survey time value +25% (Default and Truck 2x programs).
3. The survey time value -25% (Default and Truck 2x programs).

The sensitivity results suggest the following for both default and Truck 2x programs.

• Outputted program costs are linearly related to input survey time. This is true for both regions. All programs with a 25% reduction in survey time yielded a 25% reduction in program cost. The same is true for a 25% increase in survey time.

These findings hold for both a default program and a Truck 2x program (with follow-up at 20, 50, and 80%), which means that these sensitivities can be applied to all current programs in this study.

The model's sensitivity to input survey costs is known to be linear and equal (1:1), as cost is calculated as a simple multiplication: surveys completed (screening and OGI follow-up) × survey costs.

Arolytics developed this model in response to Canadian federal and provincial methane regulations that came into effect in January 2020. These regulations require oil and gas producers to survey and/or screen their sites in accordance with prescribed work practices.
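As a minimal sketch of that relation (function and parameter names here are illustrative, not the model's actual interface), total cost is the completed survey count times the per-survey cost, so scaling the survey-cost input by 25% scales program cost by exactly 25%:

```python
# Illustrative cost relation: total cost = surveys completed x survey cost.
# Names and values are assumptions for this sketch, not model internals.
def program_cost(screening_surveys, followup_surveys, cost_per_survey):
    return (screening_surveys + followup_surveys) * cost_per_survey

base = program_cost(200, 40, 1000.0)
scaled = program_cost(200, 40, 1000.0 * 1.25)   # survey cost +25%
print(scaled / base)  # 1.25, i.e. a 1:1 (linear) sensitivity
```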

In order of preference, the Arolytics model uses methane emission and repair data from a) previous company leak detection data, b) the region to be modelled, or c) a nearby region with a similar oil and gas production style. As oil and gas producers begin conducting LDAR programs in 2020, increasingly large amounts of leak detection data will become available.

The amount of time it takes to survey a site depends on the technology type, its limitations, and the service provider. Screening technologies that cannot localize leaks must be followed up with a more precise technology before repair can take place.

The Arolytics model classifies technologies as either "site-scale" or "equipment-scale". This classification is based on both the information provided by the service provider about the technology's capabilities and the producer's intended methodology for implementing the technology.

Arolytics confirms with service providers that the technologies will only be deployed in conditions that meet the technology performance requirements and that "weather days" will not impact measurement costs.

S2.3 Model Set-Up
Before running the model, the user must define an annual LDAR program as a series of methane measurement or detection "campaigns" (example in Table 1). Each campaign constitutes a technology being sent to a selection of upstream sites for leak detection. Typically, all infrastructure included in a campaign is only surveyed or screened once. If infrastructure needs to be surveyed more than once throughout the year, more campaigns are included in the LDAR program.

The LDAR program to be modeled can be adjusted to incorporate a variety of scenarios, including baseline (no LDAR), default (the regulatory default requirements for the region), or any type of alternative LDAR program.

The user can choose which technologies they wish to model from a list of technology options that Arolytics has compiled for the region of interest. Technology options include unique combinations of both the technology type and the service provider who will implement the technology. Each technology included in the program requires its own field measurement "campaign".

For each campaign, the user must choose a "campaign type". Campaign types include "survey", "flag", and "follow-up", defined below.

• Survey: This campaign type signifies that the chosen technology is the main method of methane detection that will be used at each well or facility it is sent to. Infrastructure found to be emitting during a "survey" campaign will be repaired. Typically, OGI campaigns are classified as "survey" campaigns.

• Flag: This campaign type is typically chosen for "alternative" technology types that are unable to pin-point exact leak locations. A "flag" campaign indicates that any leaks identified with the chosen technology must be followed up by a more detailed technology to localize the leak and/or quantify the emission rate. Sites found to be emitting during a "flag" campaign will not be repaired until a "follow-up" campaign has taken place.

• Follow-Up: This campaign type is only used in conjunction with a "flag" campaign. During a "follow-up" campaign, the chosen technology is only sent to sites that were screened during the corresponding "flag" campaign and found to be emitting above a certain threshold (see Follow-Up Threshold below). Sites found to be emitting during a "follow-up" campaign will be repaired.
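The campaign set-up above might be represented as in the following sketch; field names are hypothetical and chosen for illustration, not the model's actual schema.

```python
# Illustrative representation of an annual LDAR program as a list of
# campaigns with the "survey" / "flag" / "follow-up" types defined above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Campaign:
    technology: str                      # e.g. "OGI", "aerial", "truck"
    campaign_type: str                   # "survey", "flag", or "follow-up"
    sites: List[str] = field(default_factory=list)
    follow_up_threshold: Optional[float] = None  # m3/day; "flag" campaigns only

# A two-campaign program: an aerial screen that flags sites, then an OGI
# follow-up whose site list is filled in from the flagged results.
program = [
    Campaign("aerial", "flag", sites=["site-A", "site-B"], follow_up_threshold=10.0),
    Campaign("OGI", "follow-up"),
]
assert all(c.campaign_type in {"survey", "flag", "follow-up"} for c in program)
```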

For each campaign, the user must define the specific infrastructure locations that will be surveyed, screened, or flagged. This approach provides the user with the flexibility to model more accurate LDAR programs.

For example, some producers may want to experiment with using an alternative technology to identify leaks at more remote sites, while still implementing a default regulatory LDAR approach at sites with better accessibility.

The follow-up threshold defines which sites identified as leaking during a "flag" campaign will be followed up by a more detailed technology to localize the leak for repair. This threshold is either an emission rate (in m3/day) or a portion of the highest-emitting sites (for example, the 10% of sites found to be emitting the most). The follow-up threshold can be defined separately for each "flag" campaign included in the LDAR program.
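A minimal sketch of the two threshold styles described above, with an illustrative helper (not the model's actual code):

```python
# Select flagged sites for follow-up either by an absolute emission rate
# (m3/day) or by taking the top fraction of emitters. Illustrative only.
def select_followup(site_rates, rate_threshold=None, top_fraction=None):
    """site_rates: {site_id: measured emission rate in m3/day}."""
    if rate_threshold is not None:
        return {s for s, r in site_rates.items() if r > rate_threshold}
    ranked = sorted(site_rates, key=site_rates.get, reverse=True)
    n = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:n])

rates = {"A": 2.0, "B": 15.0, "C": 40.0, "D": 0.5}
print(sorted(select_followup(rates, rate_threshold=10.0)))  # ['B', 'C']
print(sorted(select_followup(rates, top_fraction=0.25)))    # ['C']
```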

Input parameters are compiled uniquely for each application of the model because each producer, geographical region, and production type is subject to varying methane emission characteristics.

Input parameters are most accurate for producers who have already conducted routine LDAR programs over multiple years, as these datasets provide insights into the producers' emission profiles and repair practices.

As routine LDAR programs are conducted by all Canadian oil and gas producers throughout 2020, the rigor of the model input parameters will improve because real and relevant datasets can be used to calculate region- and company-specific parameters.

The Leak Production Rate (LPR) is the probability that a given site will begin leaking on any given day.

When possible, the LPR is calculated uniquely for each infrastructure type. In cases where a producer has already conducted routine LDAR programs, the LPR is derived from these datasets. In cases where there have been no rigorous LDAR programs to date, the LPR is calculated from previous emission studies in similar areas and/or oil and gas development types.
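In a daily Monte Carlo step, an LPR of this kind acts as a per-site Bernoulli probability of leak onset. A minimal sketch, with an assumed LPR value and site count chosen purely for illustration:

```python
# One-year sketch of LPR-driven leak onset: each non-leaking site begins
# leaking on a given day with probability LPR. Values are illustrative
# assumptions, not calibrated parameters.
import random

def daily_leak_onset(leaking, all_sites, lpr, rng):
    """Return the updated set of leaking sites after one simulated day."""
    new = {s for s in all_sites - leaking if rng.random() < lpr}
    return leaking | new

rng = random.Random(42)
sites = {f"site-{i}" for i in range(1000)}
leaking = set()
for _day in range(365):
    leaking = daily_leak_onset(leaking, sites, lpr=0.002, rng=rng)
# Expected fraction leaking after a year: 1 - (1 - 0.002)**365, about 0.52.
print(len(leaking))
```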

The likelihood of leaks appearing, or re-appearing, is an important characteristic to consider when modeling emission rates over time. To date, LPR is loosely defined due to a lack of continuous and/or repeated measurement datasets.

In cases where a producer has already conducted routine LDAR programs, the LDP is derived from these datasets. In cases where there have been no rigorous LDAR programs to date, the LDP is derived from previous emission studies in similar areas and/or oil and gas development types.

The model assumes that leaks may be repaired by the producer as soon as one day after they are detected.

However, there is a limit to how many repairs a producer can reliably complete in one day. The number of repairs that can be performed in one day in the model is defined as "Repairs Per Day". This value is derived from conversations with producers about their operational practices.
Natural Repair Rate (NRR) is the probability that a leak will be repaired during normal operations, and not as a part of an LDAR program. The NRR is calculated from previous LDAR datasets when possible, or otherwise is estimated from best available data. Typically, the NRR is low and has negligible impact compared to other parameters.

This section defines each step of the methane simulation as it occurs in the model (example in Figure 1).

For each day of the simulation, certain non-leaking infrastructure is randomly assigned a "leaking" status based on the LPR. The "leaking" status persists through every day of the simulation until the leak gets repaired (either naturally, or as part of the LDAR program). Each piece of infrastructure with a "leaking" status is randomly assigned an emission rate from the corresponding LDP.

Once the sites to be surveyed on the current day are selected, the probability of leak detection is applied to identify where leaks might actually be detected. Finally, of the sites where leaks might be detected, if the selected infrastructure is leaking at an emission rate greater than the MDL of the technology being used in the campaign, the infrastructure is flagged as follows:

• For "survey" campaigns: Infrastructure that are selected for the current day are identified as "surveyed", and infrastructure that are found to be leaking above the MDL are identified as "detected".

• For "flag" campaigns: Infrastructure that are selected for the current day are identified as "visited", and infrastructure that are found to be leaking above the MDL and the follow-up threshold are identified as "requiring follow-up".

• For "follow-up" campaigns: Infrastructure that are selected for the current day are identified as "surveyed", and infrastructure found to be leaking above the technology MDL are tagged as "detected".

This process is completed for each day of the year that has an active LDAR campaign.
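The per-campaign rules above can be condensed into a small classifier; function and label names are taken from the text, but the helper itself is an illustrative sketch rather than the model's code:

```python
# Classify one piece of infrastructure on a campaign day, per the rules
# above: below-MDL sites are merely visited/surveyed; above-MDL sites are
# detected, or marked for follow-up on "flag" campaigns. Illustrative only.
def classify(campaign_type, rate, mdl, followup_threshold=0.0):
    if rate <= mdl:
        return "visited" if campaign_type == "flag" else "surveyed"
    if campaign_type == "flag":
        return "requiring follow-up" if rate > followup_threshold else "visited"
    return "detected"  # "survey" and "follow-up" campaigns

print(classify("flag", rate=12.0, mdl=1.0, followup_threshold=10.0))  # requiring follow-up
print(classify("survey", rate=12.0, mdl=1.0))                         # detected
print(classify("follow-up", rate=0.5, mdl=1.0))                       # surveyed
```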

To simulate natural repairs, leaking infrastructure are randomly selected to be repaired according to the NRR.

Directly after natural repairs, these infrastructure locations are no longer considered leaking for the current day. However, on the following day of the simulation, the newly repaired site is just as at risk of starting to leak as all other non-leaking sites. This process could change as we collect more comprehensive data about the probability of leaks reoccurring at various sites.

For each day of the simulation, leaking infrastructure that has been detected on a "survey" or "follow-up" campaign can be repaired. To simulate repairs that are part of the LDAR program, all leaking infrastructure that was detected on a campaign is randomly selected until the maximum number of repairs per day has been reached. All leaks at the selected infrastructure locations are then repaired.
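A sketch of that repair step, assuming an illustrative Repairs Per Day value:

```python
# Draw detected leaking sites at random until the Repairs Per Day cap is
# reached; all leaks at chosen sites are then repaired. Illustrative only.
import random

def daily_repairs(detected_leaking, repairs_per_day, rng):
    pool = list(detected_leaking)
    rng.shuffle(pool)
    return set(pool[:repairs_per_day])

rng = random.Random(0)
repaired = daily_repairs({"A", "B", "C", "D", "E"}, repairs_per_day=3, rng=rng)
print(len(repaired))  # 3
```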

Directly after repairs, these infrastructure locations are no longer considered leaking for the current day.

However, on the following day of the simulation, the newly repaired site is just as at risk of starting to leak as all other non-leaking sites. This process could change as we collect more comprehensive data about the probability of leaks reoccurring at various sites.

The model can be used to iteratively test combinations of the various input parameters mentioned above. For example, the user may wish to test various follow-up thresholds with technology types. In this case, the model runs every possible combination of parameters and produces a summary of the most effective programs.
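The iterative testing can be sketched as a grid search over the parameter combinations; evaluate() here is a hypothetical stand-in for a full model run, and its scoring is purely illustrative:

```python
# Grid search over candidate parameters, ranking programs by an
# illustrative cost score. evaluate() is a stand-in, not the real model.
from itertools import product

def evaluate(technology, followup_pct):
    # Stand-in scoring with made-up integer weights; illustrative only.
    cost = {"truck": 10, "drone": 12, "aerial": 8}[technology]
    return cost * (50 + followup_pct)

grid = product(["truck", "drone", "aerial"], [20, 50, 80])
results = sorted((evaluate(t, f), t, f) for t, f in grid)
best = results[0]
print(best)  # (560, 'aerial', 20)
```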

It should be noted that results of the model are not guaranteed to reflect what may occur when these LDAR programs are implemented in reality. It is the sole responsibility of the producer to ensure LDAR programs are completed as prescribed. Arolytics has no control over the implementation of any proposed LDAR programs, and therefore does not guarantee that the LDAR program will result in methane emission reductions equivalent to or less than default LDAR programs.
Model accuracy is lower in regions where routine LDAR programs have not been implemented, or in developments where Arolytics is not able to access emission datasets from similar regions/production styles.

Depending on the level of methane emission data available for each region to be modelled, assumptions may be made for various input parameters. Arolytics will disclose all assumptions to the producer in the final project report, and we encourage these assumptions to be additionally disclosed to the regulator upon the submission of an alternative LDAR program application.

It is also important to note that the model is continually being refined and the above process is subject to change. Any changes in process will be identified upon completion of the modeling work.

Arolytics understands the importance of being transparent about methods used to model alternative LDAR emission reductions. On reasonable request, Arolytics will disclose detailed descriptions of all processes used to acquire and analyze emission datasets, as well as the model algorithms. As a for-profit business, Arolytics reserves the right to withhold information about methods from parties who may be positioned as competitors to our products and services. The information contained in this document is confidential, privileged, and only for the intended recipient, and may not be used, published or redistributed without the prior written consent of Arolytics Incorporated.