Effectiveness of proximal intra-operative salvage Palmaz stent placement for endoleak during endovascular aneurysm repair

Hong Kong Med J 2016 Dec;22(6):538–45 | Epub 24 Oct 2016
DOI: 10.12809/hkmj154799
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Effectiveness of proximal intra-operative salvage Palmaz stent placement for endoleak during endovascular aneurysm repair
Y Law, MB, BS, FRCS (Edin); YC Chan, MB, BS, FRCS (Eng); Stephen WK Cheng, MB, BS, FRCS (Edin)
Division of Vascular and Endovascular Surgery, Department of Surgery, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong
 
This paper was presented in abstract form at the 15th Congress of the Asian Society for Vascular Surgery (ASVS), 5-7 September 2014, Hong Kong.
 
Corresponding author: Dr Y Law (lawyuksimpson@gmail.com)
 
 
 
Abstract
Introduction: The use of a proximal Palmaz stent is a well-recognised technique to treat proximal endoleak in endovascular aortic repair. This study aimed to report the effectiveness and safety of an intra-operative Palmaz stent for immediate type 1a endoleak in Hong Kong patients.
 
Methods: This case series was conducted at a tertiary hospital in Hong Kong. In a cohort of 494 patients who underwent infrarenal endovascular aortic repair from July 1999 to September 2015, 12 (2.4%) received an intra-operative proximal Palmaz stent for type 1a endoleak. Immediate results and subsequent proximal endoleak on follow-up imaging were documented.
 
Results: Morphological review of the pre-repair aneurysm neck showed five conical, one funnel, five cylindrical, and one undetermined short neck, with a median neck angle of 61 degrees (range, 19-109 degrees). Stent grafts used included seven Cook Zenith, one Cook Aorto-Uni-Iliac device, three Medtronic Endurant, and one TriVascular Ovation. Eleven Palmaz stents were placed successfully as intended; one was accidentally placed too low. Of the 12 type 1a endoleaks, completion imaging revealed immediate resolution of eight, whilst four had improved. After a median follow-up of 16 (range, 1-59) months, no subsequent imaging showed a type 1a endoleak. The mean aneurysm sac size decreased from 7.4 cm preoperatively to 7.3 cm at 1 month, 6.9 cm at 6 months, 7.1 cm at 1 year, and 6.1 cm at 2 years postoperatively. No patient required aortic reintervention or had device-related early- or mid-term mortality. One patient required a delayed iliac re-intervention for an occluded limb at 10 days post-surgery.
 
Conclusion: In our cohort, Palmaz stenting was effective and safe in securing proximal sealing and fixation.
 
 
New knowledge added by this study
  • Palmaz stenting is effective and safe as a salvage treatment of proximal endoleak during endovascular aortic repair (EVAR).
Implications for clinical practice or policy
  • Appropriate patient selection, meticulous preoperative planning, and diligent follow-up ensure the ultimate success of EVAR.
  • A Palmaz stent should always be readily available during EVAR, especially for aneurysms with a hostile aortic neck.
 
 
Introduction
A hostile proximal infrarenal aortic neck is often considered a relative contra-indication for infrarenal endovascular aortic repair (EVAR), yet large multicentre registries have shown that EVAR is increasingly performed worldwide beyond manufacturers’ indications.1 2 3 Data from the EUROSTAR Registry more than a decade ago, covering aneurysm morphology and procedural details of 2272 patients, showed that perioperative type 1a endoleak was significantly associated with a larger, angulated infrarenal neck containing thrombus.4 Other associated morphological features included short aortic neck length, large sac diameter, severe angulation, and conical neck configuration.5 An endoleak is defined as the persistence of blood flow outside the lumen of an endovascular graft but within the aneurysm sac. A type 1a endoleak is a leak around the proximal graft attachment site. Such leaks are clinically important because they can lead to continued aneurysm enlargement and eventual rupture.6
 
Although the development of renal and mesenteric fenestration or branched stent graft technology to extend the landing zone more proximally may overcome type 1a endoleaks, these devices may not be ideal because of regulations, complexity, and manufacturing delays.7 In addition, contemporary published literature does not provide sufficiently strong evidence to warrant a change in treatment guidelines for juxtarenal or short-neck aneurysm.8
 
The use of a proximal giant Palmaz stent (Cordis Corp, Fremont [CA], US) is a well-recognised technique to manage a type 1a endoleak intra-operatively.9 10 11 12 A prophylactic Palmaz stent inserted for hostile neck has also been reported.13 14 15 The aim of this study was to report the effectiveness and safety of an intra-operative Palmaz stent for an immediate type 1a endoleak.
 
Methods
Patient selection
Patients with an infrarenal abdominal aortic aneurysm (AAA) who had undergone EVAR between 1 July 1999 and 30 September 2015 were retrospectively reviewed. Data were extracted from a prospectively collected computerised departmental database, supplemented by patient records. This study was conducted in accordance with the principles outlined in the Declaration of Helsinki. All patients had undergone a preoperative fine-cut computed tomographic (CT) scan and careful preoperative planning that concluded EVAR was feasible. In all patients, the three-dimensional aortic anatomy was examined preoperatively on a stationary TeraRecon Aquarius workstation (TeraRecon, San Mateo [CA], US), which rapidly manipulates the preoperative data sets in Digital Imaging and Communications in Medicine (DICOM) format to delineate aortic morphology. Such details would be difficult to appreciate on two-dimensional planar CT imaging, especially in patients with an inadequate, short, hostile neck. Patients who were deemed unsuitable for infrarenal sealing would either undergo open repair or, more recently, fenestrated or adjunctive EVAR (such as the chimney technique).
 
Cook Zenith (Cook Medical, Bloomington [IN], US), Medtronic Endurant (Medtronic Inc, Minneapolis [MN], US), and TriVascular Ovation (TriVascular Inc, Santa Rosa [CA], US) stent grafts were most commonly used. We sometimes extended the manufacturers’ indications of use to include difficult aortic necks. All operations were performed in our institution, a tertiary vascular referral centre, by experienced vascular specialists in a hybrid endovascular suite (Siemens Artis zee Multipurpose System; Siemens, Erlangen, Germany). Post-deployment balloon molding with a Coda balloon (Cook Medical, Bloomington [IN], US) and a completion angiogram with power injection were performed in all patients.
 
Proximal Palmaz stent placement
In the event of a proximal endoleak on completion angiogram, further balloon molding was attempted. Persistent endoleak was managed with a balloon-expandable giant Palmaz stent P4014 (Cordis Corp). Our preference was to position the Palmaz stent in the infrarenal region over the fabric of the stent graft, so as not to jeopardise any future chance of proximal extension with a renal or mesenteric fenestrated cuff. The stent was crimped on a Coda balloon (maximum inflated diameter, 40 mm) and advanced over a stiff guidewire to the proximal neck.
 
Preoperative aneurysm neck morphological analysis
The cause of proximal endoleak was usually a hostile neck. We therefore focused on the infrarenal aortic neck anatomy where the Palmaz stent was placed and exerted its effect. All preoperative CT scans were analysed. Measurements were calculated automatically by the Aquarius iNtuition Client software version 4.4 (TeraRecon Inc) after a median centre line path (MCLP) was drawn. We used the neck measurement definitions of Filis et al16:
(1) Neck length is the distance along the MCLP between the orifice of the lower renal artery and the start of the aneurysm.
(2) Neck diameter is measured on the orthogonal cross-section of the neck, from outer wall to outer wall, at the level of the lower renal artery and at 1 cm and 2 cm below it.
(3) Neck angle is the angle between the axis of the neck (straight line between the MCLP of the aorta at the level of the orifice of the lower renal artery and the MCLP of the aorta at the start of the aneurysm) and the axis of the lumen of the aneurysm (straight line between the MCLP at the start of the aneurysm and the MCLP at the end of the aneurysm).
(4) Neck morphology (Fig 1):
(a) Funnel shape is defined as a 20% decrease in neck area between the level of the lower renal artery and 2 cm below.
(b) Conical shape is defined as a 20% increase in neck area between the level of the lower renal artery and 2 cm below.
(c) Cylindrical shape is any configuration between the above two.
(d) Shape is undetermined if the neck length is less than 2 cm (a computational sketch of these definitions follows below).
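For illustration only, the shape classification and neck angle definitions above can be expressed computationally. The following minimal sketch is ours, not part of the TeraRecon software; the function names, units, and the treatment of the 20% thresholds as inclusive are assumptions.

```python
import numpy as np

def classify_neck_shape(area_renal, area_2cm_below, neck_length_cm):
    """Classify neck morphology per the definitions of Filis et al:
    funnel (20% area decrease over the 2 cm below the lower renal
    artery), conical (20% increase), cylindrical (in between), and
    undetermined when the neck is shorter than 2 cm."""
    if neck_length_cm < 2.0:
        return "undetermined"
    change = (area_2cm_below - area_renal) / area_renal
    if change >= 0.20:
        return "conical"
    if change <= -0.20:
        return "funnel"
    return "cylindrical"

def neck_angle_degrees(mclp_renal, mclp_sac_start, mclp_sac_end):
    """Angle between the neck axis (lower renal artery to start of the
    aneurysm) and the aneurysm lumen axis (start to end of the
    aneurysm); 0 degrees corresponds to a perfectly straight segment."""
    v_neck = np.asarray(mclp_sac_start, float) - np.asarray(mclp_renal, float)
    v_sac = np.asarray(mclp_sac_end, float) - np.asarray(mclp_sac_start, float)
    cosine = np.dot(v_neck, v_sac) / (np.linalg.norm(v_neck) * np.linalg.norm(v_sac))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Example: a 24% area increase over a 2.4-cm neck is conical, and the
# three illustrative MCLP points give roughly 60 degrees of angulation
print(classify_neck_shape(area_renal=4.5, area_2cm_below=5.6, neck_length_cm=2.4))
print(round(neck_angle_degrees([0, 0, 0], [0, 0, -3.0], [0, 2.6, -4.5]), 1))
```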
 

Figure 1. Illustration of neck morphology
 
Clinical and radiological outcomes
Immediate radiological result following Palmaz stent placement was reported. Any procedural mortality or morbidity was recorded. Our departmental protocol recommends postoperative imaging, either CT scan or duplex scan, at 1 to 3 months, then every 6 months for the first 2 years, and annually thereafter. These were supplemented by X-ray to detect any stent fracture or migration. Any endoleak detected during follow-up was reported as well as any alteration in sac size. Follow-up was dated to the most recent objective imaging available.
 
Results
Patient population
During the study period (1 July 1999 to 30 September 2015), a total of 842 AAA operations were performed, of which 320 (38%) were open repairs and 522 (62%) were endovascular repairs (28 fenestrated/branched EVAR and 494 infrarenal EVAR). Of the 494 patients who underwent infrarenal EVAR, 12 (2.4%) received an intra-operative proximal Palmaz stent for a type 1a endoleak noticed on completion angiogram. No patient received a prophylactic Palmaz stent for difficult neck anatomy.
 
Patient demographics are summarised in Table 1. The median age was 84 (range, 58-95) years. All patients underwent elective surgery for asymptomatic AAA. The median AAA size was 7.6 cm (range, 5.0-9.4 cm). Seven patients received a Cook Zenith stent graft, one a Cook Aorto-Uni-Iliac device, three a Medtronic Endurant stent graft, and one a TriVascular Ovation stent graft. The occurrence of type 1a endoleak was more common in recent years (Table 2).
 

Table 1. Baseline characteristics of 12 patients
 

Table 2. Aneurysm neck morphology
 
Analysis of aneurysm neck morphology
Table 2 summarises the neck morphologies of our cohort. Morphological review of the pre-EVAR aneurysm neck showed five conical, one funnel, five cylindrical, and one undetermined short neck. The median neck angle was 61 degrees (range, 19-109 degrees). Stent graft use was outside the manufacturer’s indications in six (50%) patients. Most patients had one or more features of a hostile neck, rendering them at high risk of proximal endoleak.
 
Radiological outcomes
All 12 patients had a persistent proximal type 1a endoleak after stent graft placement with standard balloon molding. Placement of a giant Palmaz stent in the infrarenal position was achieved in all cases, although one stent was deployed too low and lodged in one of the iliac limbs. Nonetheless, it served its function well in correcting the proximal endoleak (Figs 2 and 3). Immediate resolution of the endoleak was achieved in eight patients (67%), whilst four (33%) had an improved but persistent leak at completion of the procedure.
 

Figure 2. Abdominal aortic aneurysm of patient 4
(a) Preoperative computed tomographic (CT) scan showing a short and angulated neck; the patient underwent endovascular aortic repair. (c) Completion angiogram showing a type 1a endoleak from the left side (arrowhead). (d) Palmaz stent being inserted (red arrows) and (e) the proximal endoleak resolved. (b) Postoperative CT scan showing no endoleak
 

Figure 3. Aneurysm of patient 10
(a) Preoperative computed tomographic (CT) scan showing an angulated neck. (b) Postoperative CT reconstruction. (c and d) X-ray scans showing the Palmaz stent placed too low and lodged in the proximal right limb (red arrows). Nonetheless, the proximal part of the Palmaz stent served well to prevent proximal endoleak
 
Clinical outcomes
After a median follow-up of 16 (range, 1-59) months, no patient had a type 1a endoleak on subsequent imaging, either CT scan or duplex ultrasound scan. All patients had at least one postoperative CT scan. Patient 2 had a type 1c endoleak from an embolised left internal iliac artery that was managed conservatively. Patients 9 and 11 had a type 2 endoleak, also managed conservatively. All showed shrinkage of the sac. Mean aneurysm sac size decreased from 7.4 cm pre-EVAR to 7.3 cm, 6.9 cm, 7.1 cm, and 6.1 cm at 1 month, 6 months, 1 year, and 2 years post-EVAR, respectively (Fig 4). Routine X-ray surveillance did not reveal any Palmaz stent fracture or migration.
 

Figure 4. Alteration in sac size over time
Patient 2 developed a type 1c endoleak; patients 9 and 11 developed type 2 endoleaks
 
One patient required secondary endovascular re-intervention for an occluded left iliac limb at day 10 postoperatively. This was due to a tortuous iliac system causing an acute bend in the stent graft limb. There were no other secondary interventions. Seven patients have since died (at 6, 9, 16, 16, 20, 24, and 59 postoperative months) of non-aneurysm–related causes. Five remained alive at the time of writing.
 
Discussion
Hostile aortic neck anatomy often precludes endovascular treatment of AAA. Large clinical cohorts have shown that up to 42% of EVARs are performed outside the instructions for use of commercially available stent grafts.3 17 18 19 Multiple measures have been developed to bring more of these difficult necks within reach of endovascular treatment; evolution of stent graft design, including suprarenal fixation20 and renal and mesenteric fenestration,21 are examples. Adjunctive neck measures, eg endostapling,22 23 proximal fibrin glue embolisation,24 open aortic neck banding,25 and proximal covered cuff extension,10 may be used in cases of perioperative type 1a endoleak following EVAR. Endostapling and glue embolisation, though minimally invasive, are not always feasible and risk major aortic injury. Open aortic neck banding requires laparotomy. Proximal cuff extension is feasible only if there is an additional sealing zone up to the most caudal renal artery; since we routinely landed our stent grafts at the level of the lowest renal artery, this technique was not usually practical. The simplest and most well-recognised manoeuvre remains placement of a proximal Palmaz stent.
 
The morphology of the infrarenal aortic neck is important in securing the proximal landing zone. Three-dimensional workstation planning has been considered useful by many26 27; for example, Sobocinski et al28 showed that it reduced the rate of type 1 endoleaks, and Velazquez et al29 that it reduced the need for additional iliac extensions. We emphasise the importance of proper pre-EVAR planning, as inadequate planning is one obvious factor that may compromise long-term durability and outcome. The fact that half of the stent grafts in our series were used outside the instructions for use, and that most patients had one or more hostile neck features, rendered our cohort at high risk of proximal endoleak. Under these circumstances, a Palmaz stent should always be readily available during EVAR.
 
Multiple series have reported successful use of the Palmaz stent. An early study by Dias et al9 reported nine patients who received a Palmaz stent and in whom the aneurysm remained excluded at a median follow-up of 13 months (range, 6-24 months). Rajani et al10 reported successful treatment of intra-operative type 1 endoleak with a Palmaz stent in 27 patients, with no recurrence at follow-up, although the length of follow-up was not stated. Arthurs et al11 reported no type 1 endoleak in 31 patients after a median follow-up of 53 months (interquartile range, 14-91 months). The Palmaz stent was effective across a variety of available devices with suprarenal fixation (eg Cook Zenith) or infrarenal fixation (eg Gore Excluder; WL Gore & Associates, Inc, Newark [DE], US). Our results are in agreement with these findings.
 
Other series have reported mixed results. Farley et al15 reported 18 cases of Palmaz stent placement. Technical placement failed in one patient, in whom attempts to pass the access sheath to the proximal landing zone resulted in proximal migration of the main body of the aortic stent graft; a subsequent attempt to pass the balloon-mounted stent without sheath protection resulted in slippage of the stent from the balloon, and the stent could not be retrieved and was deployed in the iliac limb. With a mean follow-up of 254 days in the 17 successful Palmaz stent placements, one patient had an unresolved type 1 endoleak. Malposition of the stent is not an unusual complication.30 31 Kim et al32 described a deployment technique to ensure accuracy: the Palmaz stent was asymmetrically hand-crimped on an appropriately sized valvuloplasty balloon so that its proximal aspect would deploy first.
 
Some series have advocated prophylactic Palmaz stent placement in hostile necks, including 15 patients reported by Cox et al.13 One patient (7%) had a secondary endoleak requiring intervention after a mean follow-up of 12 months. Qu and Raithel14 reported 117 cases of difficult neck treated with the unibody Powerlink device (Endologix Inc, Irvine [CA], US); 83 (72.8%) received a proximal Palmaz stent as an adjunctive procedure, and proximal cuff extension was also used. The mean follow-up was 2.6 years (range, 4 months-5 years). Results were satisfactory, with an overall re-intervention rate of 5.3% and no device migration, conversion, or post-EVAR rupture.
 
In our unit, Palmaz stents were routinely placed in an infrarenal position so that future extension with a fenestrated cuff would remain possible. If a transrenal position is adopted, the struts of the Palmaz stent leave only a very tight space and may hinder catheterisation of the renal or visceral arteries should a fenestrated cuff be needed. This is not absolute, however. Oikonomou et al33 reported a patient previously treated with a Powerlink stent graft and a transrenal Palmaz stent, in whom a proximal endoleak 3 years after operation was successfully treated with a proximal fenestrated cuff. Selective catheterisation of both renal arteries and dilation of the stent struts prior to stent graft repair ensured that catheterisation of the renal arteries through the fenestrated cuff would be feasible.
 
There are limitations to this study. Its retrospective nature risks incomplete or inaccurate information. The efficacy of the Palmaz stent in aneurysms with a short infrarenal neck could not be fully tested, as the majority of such patients proceeded directly to custom-made fenestrated or branched EVAR. In our limited experience, the Palmaz stent is a valuable tool to expand the boundary of endovascular treatment for AAA.
 
Conclusion
A Palmaz stent helps secure proximal sealing and fixation. In our experience, Palmaz stenting is an effective and safe salvage treatment of immediate proximal endoleak during EVAR. We emphasise the importance of appropriate patient selection, careful pre-EVAR planning, and diligent follow-up.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Bachoo P, Verhoeven EL, Larzon T. Early outcome of endovascular aneurysm repair in challenging aortic neck morphology based on experience from the GREAT C3 registry. J Cardiovasc Surg (Torino) 2013;54:573-80.
2. Matsumoto T, Tanaka S, Okadome J, et al. Midterm outcomes of endovascular repair for abdominal aortic aneurysms with the on-label use compared with the off-label use of an endoprosthesis. Surg Today 2015;45:880-5. Crossref
3. Hoshina K, Hashimoto T, Kato M, Ohkubo N, Shigematsu K, Miyata T. Feasibility of endovascular abdominal aortic aneurysm repair outside of the instructions for use and morphological changes at 3 years after the procedure. Ann Vasc Dis 2014;7:34-9. Crossref
4. Buth J, Harris PL, van Marrewijk C, Fransen G. The significance and management of different types of endoleaks. Semin Vasc Surg 2003;16:95-102. Crossref
5. Stanley BM, Semmens JB, Mai Q, et al. Evaluation of patient selection guidelines for endoluminal AAA repair with the Zenith Stent-Graft: the Australasian experience. J Endovasc Ther 2001;8:457-64. Crossref
6. Fransen GA, Vallabhaneni SR Sr, van Marrewijk CJ, et al. Rupture of infra-renal aortic aneurysm after endovascular repair: a series from EUROSTAR registry. Eur J Vasc Endovasc Surg 2003;26:487-93. Crossref
7. Chuter T, Greenberg RK. Standardized off-the-shelf components for multibranched endovascular repair of thoracoabdominal aortic aneurysms. Perspect Vasc Surg Endovasc Ther 2011;23:195-201. Crossref
8. Ou J, Chan YC, Cheng SW. A systematic review of fenestrated endovascular repair for juxtarenal and short-neck aortic aneurysm: evidence so far. Ann Vasc Surg 2015;29:1680-8. Crossref
9. Dias NV, Resch T, Malina M, Lindblad B, Ivancev K. Intraoperative proximal endoleaks during AAA stent-graft repair: evaluation of risk factors and treatment with Palmaz stents. J Endovasc Ther 2001;8:268-73. Crossref
10. Rajani RR, Arthurs ZM, Srivastava SD, Lyden SP, Clair DG, Eagleton MJ. Repairing immediate proximal endoleaks during abdominal aortic aneurysm repair. J Vasc Surg 2011;53:1174-7. Crossref
11. Arthurs ZM, Lyden SP, Rajani RR, Eagleton MJ, Clair DG. Long-term outcomes of Palmaz stent placement for intraoperative type Ia endoleak during endovascular aneurysm repair. Ann Vasc Surg 2011;25:120-6. Crossref
12. Chung J, Corriere MA, Milner R, et al. Midterm results of adjunctive neck therapies performed during elective infrarenal aortic aneurysm repair. J Vasc Surg 2010;52:1435-41. Crossref
13. Cox DE, Jacobs DL, Motaganahalli RL, Wittgen CM, Peterson GJ. Outcomes of endovascular AAA repair in patients with hostile neck anatomy using adjunctive balloon-expandable stents. Vasc Endovascular Surg 2006;40:35-40. Crossref
14. Qu L, Raithel D. Experience with the Endologix Powerlink endograft in endovascular repair of abdominal aortic aneurysms with short and angulated necks. Perspect Vasc Surg Endovasc Ther 2008;20:158-66. Crossref
15. Farley SM, Rigberg D, Jimenez JC, Moore W, Quinones-Baldrich W. A retrospective review of Palmaz stenting of the aortic neck for endovascular aneurysm repair. Ann Vasc Surg 2011;25:735-9. Crossref
16. Filis KA, Arko FR, Rubin GD, Zarins CK. Three-dimensional CT evaluation for endovascular abdominal aortic aneurysm repair. Quantitative assessment of the infrarenal aortic neck. Acta Chir Belg 2003;103:81-6. Crossref
17. Walker J, Tucker LY, Goodney P, et al. Adherence to endovascular aortic aneurysm repair device instructions for use guidelines has no impact on outcomes. J Vasc Surg 2015;61:1151-9. Crossref
18. Igari K, Kudo T, Toyofuku T, Jibiki M, Inoue Y. Outcomes following endovascular abdominal aortic aneurysm repair both within and outside of the instructions for use. Ann Thorac Cardiovasc Surg 2014;20:61-6. Crossref
19. Lee JT, Ullery BW, Zarins CK, Olcott C 4th, Harris EJ Jr, Dalman RL. EVAR deployment in anatomically challenging necks outside the IFU. Eur J Vasc Endovasc Surg 2013;46:65-73. Crossref
20. Robbins M, Kritpracha B, Beebe HG, Criado FJ, Daoud Y, Comerota AJ. Suprarenal endograft fixation avoids adverse outcomes associated with aortic neck angulation. Ann Vasc Surg 2005;19:172-7. Crossref
21. Verhoeven EL, Vourliotakis G, Bos WT, et al. Fenestrated stent grafting for short-necked and juxtarenal abdominal aortic aneurysm: an 8-year single-centre experience. Eur J Vasc Endovasc Surg 2010;39:529-36. Crossref
22. Donas KP, Kafetzakis A, Umscheid T, Tessarek J, Torsello G. Vascular endostapling: new concept for endovascular fixation of aortic stent-grafts. J Endovasc Ther 2008;15:499-503. Crossref
23. Avci M, Vos JA, Kolvenbach RR, et al. The use of endoanchors in repair EVAR cases to improve proximal endograft fixation. J Cardiovasc Surg (Torino) 2012;53:419-26.
24. Feng JX, Lu QS, Jing ZP, et al. Fibrin glue embolization treating intra-operative type I endoleak of endovascular repair of abdominal aortic aneurysm: long-term result [in Chinese]. Zhonghua Wai Ke Za Zhi 2011;49:883-7.
25. Scarcello E, Serra R, Morrone F, Tarsitano S, Triggiani G, de Franciscis S. Aortic banding and endovascular aneurysm repair in a case of juxtarenal aortic aneurysm with unsuitable infrarenal neck. J Vasc Surg 2012;56:208-11. Crossref
26. Lee WA. Endovascular abdominal aortic aneurysm sizing and case planning using the TeraRecon Aquarius workstation. Vasc Endovascular Surg 2007;41:61-7. Crossref
27. Parker MV, O’Donnell SD, Chang AS, et al. What imaging studies are necessary for abdominal aortic endograft sizing? A prospective blinded study using conventional computed tomography, aortography, and three-dimensional computed tomography. J Vasc Surg 2005;41:199-205. Crossref
28. Sobocinski J, Chenorhokian H, Maurel B, et al. The benefits of EVAR planning using a 3D workstation. Eur J Vasc Endovasc Surg 2013;46:418-23. Crossref
29. Velazquez OC, Woo EY, Carpenter JP, Golden MA, Barker CF, Fairman RM. Decreased use of iliac extensions and reduced graft junctions with software-assisted centerline measurements in selection of endograft components for endovascular aneurysm repair. J Vasc Surg 2004;40:222-7. Crossref
30. Gabelmann A, Krämer SC, Tomczak R, Görich J. Percutaneous techniques for managing maldeployed or migrated stents. J Endovasc Ther 2001;8:291-302. Crossref
31. Slonim SM, Dake MD, Razavi MK, et al. Management of misplaced or migrated endovascular stents. J Vasc Interv Radiol 1999;10:851-9. Crossref
32. Kim JK, Noll RE Jr, Tonnessen BH, Sternbergh WC 3rd. A technique for increased accuracy in the placement of the “giant” Palmaz stent for treatment of type IA endoleak after endovascular abdominal aneurysm repair. J Vasc Surg 2008;48:755-7. Crossref
33. Oikonomou K, Botos B, Bracale UM, Verhoeven EL. Proximal type I endoleak after previous EVAR with Palmaz stents crossing the renal arteries: treatment using a fenestrated cuff. J Endovasc Ther 2012;19:672-6. Crossref

Nephrolithiasis among male patients with newly diagnosed gout

Hong Kong Med J 2016 Dec;22(6):534–7 | Epub 9 Sep 2016
DOI: 10.12809/hkmj154694
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Nephrolithiasis among male patients with newly diagnosed gout
KS Wan, MD, PhD1,2; CK Liu, MD, MPH3,4; MC Ko, MD3; WK Lee, MD3; CS Huang, MD2
1 Department of Immunology and Rheumatology, Taipei City Hospital-Zhongxing Branch, Taiwan
2 Department of Pediatrics, Taipei City Hospital-Renai Branch, Taiwan
3 Department of Urology, Taipei City Hospital-Zhongxing Branch, Taiwan
4 Fu Jen Catholic University School of Medicine, Taiwan
 
Corresponding author: Dr KS Wan (gwan1998@gmail.com)
 
 
 
Abstract
Introduction: An elevated serum urate level is a recognised cause of gouty arthritis and uric acid stones. The level of serum uric acid that accelerates kidney stone formation, however, has not yet been clarified. This study aimed to determine whether a high serum urate level is associated with nephrolithiasis.
 
Methods: Patients were recruited from the rheumatology clinic of Taipei City Hospital (Renai and Zhongxing branches) in Taiwan from March 2015 to February 2016. A total of 120 Chinese male patients with newly diagnosed gout and serum urate concentration of >7 mg/dL and no history of kidney stones were divided into two groups according to their serum urate level: <10 mg/dL (group 1, n=80) and ≥10 mg/dL (group 2, n=40). The mean body mass index, blood urea nitrogen level, creatinine level, urinary pH, and kidney ultrasonography were compared between the two groups.
 
Results: There were no significant differences in blood urea nitrogen or creatinine level between the two groups. Urine pH was similar in both groups, with no statistically significant difference. Kidney stone formation was detected by ultrasonography in 6.3% (5/80) and 82.5% (33/40) of patients in groups 1 and 2, respectively (P<0.05).
 
Conclusion: A serum urate level of ≥10 mg/dL may precipitate nephrolithiasis. Further studies are warranted to substantiate the relationship between serum urate level and kidney stone formation.
 
 
New knowledge added by this study
  • Hyperuricaemia is a risk factor for renal stone formation and is associated with a substantially higher prevalence of nephrolithiasis on ultrasonography.
  • Patients with gouty arthritis and serum urate level of ≥10 mg/dL should be advised to have renal ultrasonography.
 
 
Introduction
Over the past century, kidney stones have become increasingly prevalent, particularly in more developed countries. The incidence of urolithiasis in a given population depends on geographic area, racial distribution, socio-economic status, and dietary habits.1 In general, patients with a history of gout are at greater risk of forming uric acid stones, as are patients with obesity, diabetes, or complete metabolic syndrome.2 Moreover, elevated serum urate levels are known to lead to gouty arthritis, tophi formation, and uric acid kidney stones.3 The proportion of uric acid stones varies between countries, accounting for 5% to 40% of all urinary calculi.4 Certain risk factors may be involved in the pathogenesis of uric acid nephrolithiasis, including low urinary volume and persistently low urinary pH.5
 
Calcium oxalate stones may form in some patients with gouty diathesis owing to increased urinary excretion of calcium and reduced excretion of citrate. In addition, relative hypercalciuria in gouty diathesis with calcium oxalate stones may be due to intestinal hyperabsorption of calcium.6 Most urinary uric acid calculi are not pure in composition; complex urates of sodium, potassium, and calcium have been found together in various proportions.7 An analysis of stones in gout patients in Japan showed that the incidence of common calcium salt stones was over 60%, while that of uric acid stones was only 30%.8 This implies that disrupted uric acid metabolism promotes not only uric acid stones but also calcium salt stones. A high serum urate level might therefore be associated with nephrolithiasis, and this provided the rationale for this study.
 
Methods
Overall, 120 male Chinese patients with newly diagnosed gouty arthritis, a serum urate concentration of >7 mg/dL, and no history of kidney stone disease were allocated to one of two groups according to their serum uric acid level: <10 mg/dL (group 1, n=80) and ≥10 mg/dL (group 2, n=40). Patients were recruited from the rheumatology clinic of Taipei City Hospital (Renai and Zhongxing branches), a tertiary community hospital in Taiwan, from March 2015 to February 2016. They had been newly diagnosed with gout and had no clinical suggestion of renal stone disease. The exclusion criteria were previously treated gouty arthritis and current prescription of urate reabsorption inhibitors. Each patient’s age, duration of gouty arthritis, presence of tophi, body mass index (BMI), blood urea nitrogen (BUN), creatinine, urinary pH, and kidney ultrasonography findings were measured and analysed. This study was approved by the hospital’s Institutional Review Board, which waived the requirement for informed consent.
 
Results for continuous variables are given as means ± standard deviations. Student’s t test was used to compare continuous physical characteristics between the subject groups, and the Chi-squared test was used to compare the stone detection rate between the two groups. A two-sided P value of <0.05 was regarded as statistically significant. The Statistical Package for the Social Sciences (Windows version 12.0; SPSS Inc, Chicago [IL], US) was used for all statistical analyses.
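As an illustration only (the authors' analysis was performed in SPSS), the comparison of stone detection rates between the two groups can be reproduced with the counts reported in the Results; the scipy usage below is our own sketch, not the study's analysis script.

```python
from scipy import stats

# 2x2 table of kidney stones on ultrasonography by serum urate group,
# using the reported counts: group 1 (<10 mg/dL, n=80) had 5 stones;
# group 2 (>=10 mg/dL, n=40) had 33 stones.
table = [[5, 75],   # group 1: stones, no stones
         [33, 7]]   # group 2: stones, no stones
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, df = {dof}, P = {p_value:.1e}")

# Student's t test would compare a continuous variable between groups,
# eg: stats.ttest_ind(bmi_group1, bmi_group2) for arrays of BMI values
```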
 
Results
The mean age of the two study groups was similar (40 years). A family history of gout was present in 67.5% and 90% of groups 1 and 2, respectively. The time elapsed since the onset of gout was less than 4 years in both groups. Tophaceous gout was found in 8.8% of group 1 and 10.0% of group 2. The proportion of patients with a BMI of ≥30 kg/m2 did not differ significantly between the two groups. Only 6% of group 2 patients with kidney stones had a BMI of >95th percentile. In most cases, urinary pH was less than 5.5 in both groups, and there were no abnormal changes in BUN or creatinine levels. Interestingly, the prevalence of kidney stones detected by ultrasonography was 6.3% in group 1 and 82.5% in group 2 (P<0.05). The sensitivity and specificity of a high serum urate level (≥10 mg/dL) in predicting kidney stones were 87% and 91%, respectively (Table).
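For clarity, the reported sensitivity and specificity can be reconstructed from the ultrasonography counts above, treating the 38 stone formers (5 + 33) and 82 non-formers (75 + 7) as the reference classes; this worked check is ours, not part of the original analysis:

\[
\text{sensitivity} = \frac{TP}{TP + FN} = \frac{33}{33 + 5} \approx 87\%, \qquad
\text{specificity} = \frac{TN}{TN + FP} = \frac{75}{75 + 7} \approx 91\%
\]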
 

Table. Risk factors of male gout patients with and without nephrolithiasis
 
Discussion
Gout is a common metabolic disorder characterised by chronic hyperuricaemia, with a serum urate level of >6.8 mg/dL exceeding the physiological saturation threshold. Urolithiasis is one of the well-known complications of gout. We hypothesise that the serum urate level can be used as a predictive marker for urolithiasis. Uric acid, a weak organic acid, has very low pH-dependent solubility in aqueous solution. Approximately 70% of urate elimination occurs in urine, and the kidney plays a dominant role in determining the plasma urate level.9 A serum urate level of >7 mg/dL is recognised as leading to gouty arthritis and uric acid stone formation. Moreover, recent epidemiological studies have identified serum urate elevation as an independent risk factor for chronic kidney disease, cardiovascular disease, and hypertension.3 Impaired renal uric acid excretion is the major mechanism of hyperuricaemia in patients with primary gout.10 The molecular mechanisms of renal urate transport are still incompletely understood. Urate transporter 1 is an organic anion transporter with highly specific urate transport activity, exchanging this anion with others, including most of the endogenous organic anions and drug anions known to affect renal uric acid transport.10 11
 
Uric acid stones account for 10% of all kidney stones and are the second most common cause of urinary stones after calcium oxalate and calcium phosphate. The most important risk factor for uric acid crystallisation and stone formation is a low urine pH (<5.5) rather than increased urinary uric acid excretion.12 The proportion of uric acid stones varies between countries and accounts for 5% to 40% of all urinary calculi.4 Uric acid homeostasis is determined by the balance between production, intestinal secretion, and renal excretion. The kidney is an important regulator of circulating uric acid levels, reabsorbing about 90% of filtered urate and accounting for 60% to 70% of total body uric acid elimination; impaired renal handling of urate is therefore a major factor underpinning hyperuricaemia and gout.13 Pure uric acid stones are radiolucent but well visualised on renal ultrasound or non-contrast helical computed tomographic scanning; the latter is especially good for detection of stones <5 mm in size.14 Nonetheless, the reason why most patients with gout present with acidic urine, even though only 20% have uric acid stones, remains unclear. In a US study, the prevalence of kidney stone disease was almost two-fold higher in men with a history of gout than in those without (15% vs 8%).15 Higher adiposity and weight gain are strong risk factors for gout in men, while weight loss is protective.15 An analysis by Shimizu8 of stones in gout patients revealed that the proportion of common calcium salt stones was over 60%, while that of uric acid stones was only about 30%. Overweight/obesity and older age, associated with low urine pH, were the principal characteristics of ‘pure’ uric acid stone formers. Impaired urate excretion associated with increased serum uric acid is another characteristic of uric acid stone formers and resembles that seen in primary gout. Patients with pure calcium oxalate stones were younger, included a lower proportion of obese subjects, and had higher urinary calcium.16
 
Conventionally, BMI is stratified as normal (<25 kg/m2), overweight (25-29.9 kg/m2), or obese (≥30 kg/m2). In males, the proportion of uric acid stones gradually increases with BMI, from 7.1% in subjects with normal BMI to 28.7% in obese subjects.17 The same is true in females, with the proportion of uric acid stones rising from 6.1% in those with normal BMI to 17.1% in obese subjects.17 Studies have found that BMI is associated with an increased risk of kidney stone disease, but beyond a BMI of >30 kg/m2, further increases do not appear to significantly increase the risk.17 18 An independent association between kidney stone disease and gout strongly suggests that they share common underlying pathophysiological mechanisms.19
 
Three major conditions control the potential for uric acid stone formation: the quantity of uric acid, the volume of urine as it affects the urinary concentration of uric acid, and the urinary pH.20 Two major abnormalities have been suggested to explain overly acidic urine: increased net acid excretion and impaired buffering caused by defective urinary ammonium excretion.21 Urinary alkalisation, which involves maintaining a continuously high urinary pH (pH 6-6.5), is considered by many to be the treatment of choice for uric acid stone dissolution and prevention.20 In general, gout is caused by the deposition of monosodium urate crystals in tissue, which provokes a local inflammatory reaction; the formation of monosodium urate crystals is facilitated by hyperuricaemia. In a study by Sakhaee and Maalouf,21 overweight and older age were associated with low urine pH, one of the principal characteristics of pure uric acid stone formation. Impaired urate excretion associated with increased serum uric acid was another characteristic of uric acid stone formation that resembles primary gout.
 
The limitations of this study included the lack of measurement of urinary uric acid concentration in the participants, the absence of further computed tomographic scanning for kidney stones, the lack of stone composition analysis, and the limited representativeness of the study subjects; for example, there were only 10 obese patients (BMI ≥30 kg/m2) in the analysis. In this study, hyperuricaemia was a risk factor for kidney stone formation. Patients with a serum urate level of ≥10 mg/dL should undergo ultrasound examination to look for nephrolithiasis.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. López M, Hoppe B. History, epidemiology and regional diversities of urolithiasis. Pediatr Nephrol 2010;25:49-59. Crossref
2. Liebman SE, Taylor JG, Bushinsky DA. Uric acid nephrolithiasis. Curr Rheumatol Rep 2007;9:251-7. Crossref
3. Edwards NL. The role of hyperuricemia and gout in kidney and cardiovascular disease. Cleve Clin J Med 2008;75 Suppl 5:S13-6. Crossref
4. Shekarriz B, Stoller ML. Uric acid nephrolithiasis: current concepts and controversies. J Urol 2002;168:1307-14. Crossref
5. Ngo TC, Assimos DG. Uric acid nephrolithiasis: recent progress and future directions. Rev Urol 2007;9:17-27.
6. Pak CY, Moe OW, Sakhaee K, Peterson RD, Poindexter JR. Physicochemical metabolic characteristics for calcium oxalate stone formation in patients with gouty diathesis. J Urol 2005;173:1606-9. Crossref
7. Bellanato J, Cifuentes JL, Salvador E, Medina JA. Urates in uric acid renal calculi. Int J Urol 2009;16:318-21; discussion 322. Crossref
8. Shimizu T. Urolithiasis and nephropathy complicated with gout [in Japanese]. Nihon Rinsho 2008;66:717-22.
9. Marangella M. Uric acid elimination in the urine. Pathophysiological implications. Contrib Nephrol 2005;147:132-48.
10. Taniguchi A, Kamatani N. Control of renal uric acid excretion and gout. Curr Opin Rheumatol 2008;20:192-7. Crossref
11. Yamauchi T, Ueda T. Primary hyperuricemia due to decreased renal uric acid excretion [in Japanese]. Nihon Rinsho 2008;66:679-81.
12. Ferrari P, Bonny O. Diagnosis and prevention of uric acid stones [in German]. Ther Umsch 2004;61:571-4. Crossref
13. Bobulescu IA, Moe OW. Renal transport of uric acid: evolving concepts and uncertainties. Adv Chronic Kidney Dis 2012;19:358-71. Crossref
14. Wiederkehr MR, Moe OW. Uric acid nephrolithiasis: a systemic metabolic disorder. Clin Rev Bone Miner Metab 2011;9:207-17. Crossref
15. Choi HK, Atkinson K, Karlson EW, Curhan G. Obesity, weight change, hypertension, diuretic use, and risk of gout in men: the health professionals follow-up study. Arch Intern Med 2005;165:742-8. Crossref
16. Negri AL, Spivacow R, Del Valle E, et al. Clinical and biochemical profile of patients with “pure” uric acid nephrolithiasis compared with “pure” calcium oxalate stone formers. Urol Res 2007;35:247-51. Crossref
17. Daudon M, Lacour B, Jungers P. Influence of body size on urinary stone composition in men and women. Urol Res 2006;34:193-9. Crossref
18. Semins MJ, Shore AD, Makary MA, Magnuson T, Johns R, Matlaga BR. The association of increasing body mass index and kidney stone disease. J Urol 2010;183:571-5. Crossref
19. Kramer HM, Curhan G. The association between gout and nephrolithiasis: the National Health and Nutrition Examination Survey III, 1988-1994. Am J Kidney Dis 2002;40:37-42. Crossref
20. Cicerello E, Merlo F, Maccatrozzo L. Urinary alkalization for the treatment of uric acid nephrolithiasis. Arch Ital Urol Androl 2010;82:145-8.
21. Sakhaee K, Maalouf NM. Metabolic syndrome and uric acid nephrolithiasis. Semin Nephrol 2008;28:174-80. Crossref

Silver-Russell syndrome in Hong Kong

Hong Kong Med J 2016 Dec;22(6):526–33 | Epub 29 Jul 2016
DOI: 10.12809/hkmj154750
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Silver-Russell syndrome in Hong Kong
HM Luk, MB, BS, FHKAM (Paediatrics)1#; KS Yeung, BSc, MPhil2#; WL Wong, BSc, MPhil2; Brian HY Chung, FHKAM (Paediatrics), FCCMG (Clinical Genetics)2; Tony MF Tong, MPhil, MSc1; Ivan FM Lo, MB, ChB, FHKAM (Paediatrics)1
1 Clinical Genetic Service, Department of Health, 3/F, Cheung Sha Wan Jockey Club Clinic, 2 Kwong Lee Road, Sham Shui Po, Hong Kong
2 Department of Paediatrics and Adolescent Medicine, Queen Mary Hospital, The University of Hong Kong, Pokfulam, Hong Kong
 
# Co-first author
 
Corresponding author: Dr Ivan FM Lo (con_cg@dh.gov.hk)
 
Abstract
Objectives: To examine the molecular pathogenetic mechanisms, the (epi)genotype-phenotype correlation, and the performance of three clinical scoring systems (the Netchine et al, Bartholdi et al, and Birmingham scores) in patients with Silver-Russell syndrome in Hong Kong.
 
Methods: This retrospective case series was conducted at the two tertiary genetic clinics in Hong Kong: the Clinical Genetic Service, Department of Health, and the clinical genetic clinic at Queen Mary Hospital. All records of patients with suspected Silver-Russell syndrome under the care of the two clinics between January 2010 and September 2015 were retrieved from the computer database.
 
Results: Of the 28 live-birth patients with Silver-Russell syndrome, 35.7% had H19 loss of DNA methylation, 21.4% had maternal uniparental disomy of chromosome 7, 3.6% had mosaic maternal uniparental disomy of chromosome 11, and the remaining 39.3% had Silver-Russell syndrome of unexplained molecular origin. No significant (epi)genotype-phenotype correlation could be identified between the H19 loss of DNA methylation and maternal uniparental disomy of chromosome 7 groups. Comparison of molecularly confirmed patients with patients with Silver-Russell syndrome of unexplained origin revealed that postnatal microcephaly and café-au-lait spots were more common in the latter group, whereas body and limb asymmetry was more common in the former. Performance analysis showed that the Netchine et al and Birmingham scoring systems had similar sensitivity in identifying Hong Kong Chinese subjects with Silver-Russell syndrome.
 
Conclusion: This is the first territory-wide study of Silver-Russell syndrome in Hong Kong. The clinical features and the spectrum of underlying epigenetic defects were comparable to those reported in western populations.
 
New knowledge added by this study
  • The epigenetic defects of Silver-Russell syndrome (SRS) in Hong Kong Chinese patients are comparable to those reported in western populations.
  • No epigenotype-phenotype correlation was demonstrated among SRS patients in this study.
Implications for clinical practice or policy
  • All suspected SRS patients should be referred to a genetic clinic for assessment.
  • A new diagnostic algorithm has been proposed for Chinese patients with SRS.
 
 
Introduction
Silver-Russell syndrome (SRS) [OMIM 180860] is a clinically and genetically heterogeneous congenital imprinting disorder. It was first described in 1953 by Dr Henry Silver and his colleagues, who reported two children with short stature and congenital hemihypertrophy.1 In the following year, Dr Alexander Russell reported five similar cases with intrauterine dwarfism and craniofacial dysostosis.2 The term SRS has been used since 1970 to describe a constellation of features comprising intrauterine growth retardation without postnatal catch-up, distinct facial characteristics, relative macrocephaly, body asymmetry, and/or fifth finger clinodactyly.3 4 The prevalence of SRS has been estimated at 1 in 100 000,5 but this is probably an underestimate given the diverse and variable clinical manifestations. The majority of SRS cases are sporadic, although occasional familial cases have been reported.
 
Two major molecular mechanisms have been implicated in SRS: maternal uniparental disomy of chromosome 7 (mUPD7)6 and loss of DNA methylation (LOM) at the imprinting control region 1 (ICR1) on the paternal allele of the chromosome 11p15 region that regulates the IGF2/H19 locus.6 7 8 9 According to published studies, LOM of ICR1 and mUPD7 account for roughly 45% to 50% and 5% to 10% of SRS cases, respectively.6 7 8 9 Rare cytogenetic rearrangements have also been reported in 1% to 2% of cases.4 10 11 In the remaining 30% to 40% of SRS cases, however, the molecular mechanism remains elusive.
 
Owing to the wide spectrum of clinical presentations of SRS, there is considerable clinical overlap with other growth retardation syndromes. At present there are no consensus diagnostic criteria, so diagnosing SRS is challenging. Several scoring systems have been proposed to facilitate clinical diagnosis and guide genetic testing.7 11 12 13 14 Based on the prevalence of the different molecular mechanisms, methylation study of the 11p15 region is the recommended first-tier investigation for patients with suspected SRS, and mUPD7 analysis the second tier.14
 
The comprehensive clinical spectrum and molecular findings of SRS have not been reported in the Chinese population. We therefore conducted a retrospective review to summarise the clinical and genetic findings of all SRS patients in Hong Kong. The sensitivity and specificity of the different scoring systems7 11 12 13 14 in identifying Hong Kong Chinese SRS patients were also studied.
 
Methods
Patients
The Clinical Genetic Service (CGS), Department of Health, and the clinical genetic clinic at Queen Mary Hospital (QMH), The University of Hong Kong, are the only two tertiary genetic referral centres that provide comprehensive genetic counselling and diagnostic and laboratory services for the Hong Kong population. Patients with clinically suspected growth failure due to genetic causes, or possible SRS, were referred for assessment and genetic testing.
 
In this review, all records of patients with suspected SRS seen at the CGS or the clinical genetic clinic of QMH between January 2010 and September 2015 were retrieved from the computer database system using the key words “Silver Russell syndrome” and “failure to thrive and growth retardation”. The clinical and laboratory data of these patients were retrospectively analysed. Patients with alternative diagnoses after assessment and genetic investigation were excluded. This study was conducted in accordance with the principles outlined in the Declaration of Helsinki.
 
Clinical diagnostic criteria for Silver-Russell syndrome in this study
Currently, there is no universal consensus on the diagnostic criteria for SRS; the criteria of Hitchins et al15 are the most commonly used clinically. In this study, SRS was diagnosed when a patient fulfilled three major criteria, or two major and two minor criteria (a schematic sketch of this decision rule follows the criteria below).
 
Major criteria included (1) intrauterine growth retardation/small for gestational age (<10th percentile); (2) postnatal growth with height/length <3rd percentile; (3) normal head circumference (3rd-97th percentile); and (4) limb, body, and/or facial asymmetry.
 
Minor criteria included (1) short arm span with normal upper-to-lower segment ratio; (2) fifth finger clinodactyly; (3) triangular facies; and (4) frontal bossing/prominent forehead.
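As an illustration only, the decision rule can be written as a small function; the criteria themselves are assessed clinically, and the function name is ours.

```python
def meets_hitchins_criteria(n_major: int, n_minor: int) -> bool:
    """Hitchins et al diagnostic rule for SRS: three major criteria,
    or two major plus two minor criteria."""
    return n_major >= 3 or (n_major >= 2 and n_minor >= 2)

# Example: two major criteria (eg, small for gestational age and postnatal
# height <3rd percentile) plus two minor criteria (eg, fifth finger
# clinodactyly and triangular facies) satisfy the rule
print(meets_hitchins_criteria(n_major=2, n_minor=2))  # True
```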
 
Epimutation in imprinting control region 1
Investigation of the methylation status and copy number changes of the H19 differentially methylated region (H19 DMR) and KvDMR1 at the chromosome 11p15 region was performed by methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA), using the SALSA MLPA ME030-B1 BWS/RSS kit (MRC-Holland, Amsterdam, The Netherlands). Following the manufacturer’s instructions, approximately 100 ng of genomic DNA was first denatured and hybridised overnight with the probe mixture supplied with the kit. The samples were then split into two portions and treated either with ligase alone or with ligase and HhaI. Polymerase chain reaction (PCR) was then performed with the reagents and primers supplied in the kit. The PCR products were separated by capillary electrophoresis (model 3130xl; Applied Biosystems, Foster City [CA], US). The electropherograms were analysed using GeneScan software (Applied Biosystems), and the relative peak areas were calculated using Coffalyser software version 9.4 (MRC-Holland).
 
Analysis of maternal uniparental disomy of chromosome 7
We studied mUPD7 with eight polymorphic microsatellite markers, three on 7p and five on 7q (D7S531, D7S507, D7S2552, D7S2429, D7S2504, D7S500, D7S2442, and D7S2465), using a standard protocol, followed by haplotype analysis. A diagnosis of mUPD7 required evidence of exclusive maternal inheritance at two or more informative markers.
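To make the diagnostic rule concrete, a schematic sketch follows. The marker names are those listed above, but the genotypes, data structure, and function names are invented for illustration and do not reflect the laboratory workflow, which is based on capillary electrophoresis traces.

```python
# Each genotype is a set of allele labels observed at one microsatellite
# marker; a trio maps marker name -> (child, mother, father) genotypes.

def maternal_only(child, mother, father):
    """True when a marker is informative (the father carries an allele
    absent from the mother) and the child shows exclusively maternal
    alleles at that marker."""
    informative = bool(father - mother)
    return informative and child <= mother

def call_mupd7(trio):
    """Diagnose mUPD7 when two or more informative markers show
    exclusive maternal inheritance, per the protocol above."""
    hits = sum(maternal_only(c, m, f) for c, m, f in trio.values())
    return hits >= 2

# Invented example genotypes at two of the eight markers
trio = {
    "D7S531": ({"a1"}, {"a1", "a2"}, {"a3", "a4"}),
    "D7S507": ({"b2"}, {"b1", "b2"}, {"b3", "b4"}),
}
print(call_mupd7(trio))  # True: two informative maternal-only markers
```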
 
Data analysis and (epi)genotype-phenotype correlation
Epidemiological data, physical characteristics, growth records, and molecular findings were collected for analysis. Clinical photographs were taken during consultation (Fig 1). To delineate the (epi)genotype-phenotype correlation, we divided the patients according to their (epi)genotype: H19 LOM, mUPD7, mosaic maternal uniparental disomy of chromosome 11 (mUPD11), or SRS of unexplained origin. SRS of unexplained origin was defined as a negative 11p15 region epimutation and mUPD7 study. For statistical calculation, Student’s t test was used for continuous variables and Fisher’s exact test for categorical variables. Two-tailed P values were computed, and differences were considered statistically significant when P≤0.05.
 

Figure 1. Clinical photos for molecularly confirmed SRS in this study
Patients with (a to d) mUPD7 and (e to g) H19 LOM. All had relative macrocephaly, frontal bossing, a triangular face, and a pointed chin. Panels (e) and (f) also show fifth finger clinodactyly and body asymmetry, respectively. (h) Informative microsatellite markers in the UPD study showing mUPD7
 
Clinical score
Three clinical scoring systems were applied to all patients referred with suspected SRS: the Netchine et al score,7 the Bartholdi et al score,12 and the Birmingham score.14 An overview of the three SRS scoring systems is given in Table 1. Using Hitchins et al’s criteria15 as the standard in this study, the sensitivity and specificity of these three scoring systems in identifying SRS were compared.
 

Table 1. Comparison of three common clinical scoring systems for SRS
 
Results
During the study period, 83 patients with suspected SRS were referred to the two genetic clinics. After clinical assessment and investigations, 54 patients had an alternative diagnosis. The remaining 29 patients were clinically diagnosed with SRS using Hitchins et al’s criteria.15 All were Chinese. One was a prenatal case with maternal H19 duplication; since termination of pregnancy was performed at 23 weeks of gestation, this case was excluded from downstream analysis. For the remaining 28 SRS patients, age at the end of the study (September 2015) ranged from 2 years to 22 years 9 months, with a median of 9 years 4 months. The male-to-female ratio was 9:5. Sequential MS-MLPA study of the chromosome 11p15 region and mUPD7 study were performed in all SRS patients. Among the 28 live-birth SRS patients, 35.7% (n=10) had H19 LOM, 21.4% (n=6) had mUPD7, 3.6% (n=1) had mosaic mUPD11, and 39.3% (n=11) had SRS of unexplained origin. The clinical features of the SRS cohort are summarised in Table 2. The clinical features of some molecularly confirmed SRS patients and one illustrative microsatellite electropherogram from mUPD7 analysis are shown in Figure 1.
 

Table 2. Summary of the clinical features in different subgroups of SRS patients
 
To study the (epi)genotype-phenotype correlation between the H19 LOM and mUPD7 groups, their clinical features were compared; there was no significant difference between the two groups (data not shown). When the 17 molecularly confirmed SRS patients were compared with the 11 patients with SRS of unexplained origin, postnatal microcephaly (P=0.01) and café-au-lait spots (P=0.05) were more common in SRS of unexplained origin, while body asymmetry (P<0.01) and limb asymmetry (P<0.01) were more common in the molecularly confirmed group.
 
The performance of the three clinical scoring systems, namely the Netchine et al score,7 Bartholdi et al score,12 and Birmingham score,14 in identifying SRS in our cohort was compared. The proportions of molecularly confirmed cases among those rated ‘likely SRS’ and ‘unlikely SRS’ by each scoring system are summarised in Table 3. The sensitivity and specificity of the different scoring systems for identifying SRS are summarised in Table 4.
 

Table 3. Proportion of different SRS subtypes with ‘likely SRS’ and ‘unlikely SRS’ score in different scoring systems in our cohort
 

Table 4. The sensitivity and specificity of the three clinical scoring systems compared with Hitchins et al’s criteria15 in identifying SRS in our cohort
 
Discussion
Silver-Russell syndrome is a clinically and genetically heterogeneous disorder. This is the first comprehensive clinical and epigenetic study of SRS in Hong Kong. With sequential 11p15 epimutation analysis and mUPD7 study of the SRS patients in this cohort, molecular confirmation was achieved in 60.7% of cases: H19 LOM and mUPD7 accounted for 35.7% and 21.4% of cases, respectively. Although the proportion of H19 LOM–related SRS cases was similar to that in western and Japanese populations,6 7 8 9 16 the proportion of mUPD7 in our cohort was significantly higher. Nonetheless, owing to the small sample size, this observation might not reflect a true ethnic-specific difference in epigenetic alteration in the Chinese population. Further studies are needed to confirm this difference.
 
In previous studies of (epi)genotype-phenotype correlation4 7 12 17 18 19 20 in SRS, patients with mUPD7 had a milder phenotype but were more likely to have developmental delay. In contrast, patients with H19 LOM appeared to have more typical SRS features such as the characteristic facial profile and body asymmetry. Such a correlation could not be demonstrated in our cohort. When the molecularly confirmed and SRS of unexplained origin groups were compared, postnatal microcephaly and café-au-lait spots were more common in the group of SRS of unexplained origin, while body/limb asymmetry was more common in the molecularly confirmed group. This observation has also been reported in Japanese SRS patients.16 It might reflect the greater clinical and genetic heterogeneity of molecularly negative SRS.
 
Although SRS has been extensively studied, there remains no universal consensus on the clinical diagnostic criteria; Hitchins et al’s criteria15 are currently the most commonly used. To facilitate clinical diagnosis, several additional scoring systems have been proposed, including the Netchine et al,7 Bartholdi et al,12 and Birmingham scores.14 Each has its advantages and limitations. The major caveats of these scoring systems are the relative subjectivity of clinical signs and the time-dependent, evolving clinical features. The heterogeneity of clinical manifestations also limits their application. Several studies have evaluated the accuracy of these scoring systems in predicting the molecular genetic testing result.14 21 We also evaluated the performance of the three scoring systems in this Chinese cohort. All three were 100% specific in diagnosing SRS, but the sensitivities of the Netchine et al score,7 Bartholdi et al score,12 and Birmingham score14 were 75%, 53.6%, and 71.4%, respectively, when compared with Hitchins et al’s criteria.15 This suggests that Hitchins et al’s criteria15 remain the most sensitive diagnostic criteria for clinical use.
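For context, taking the 28 Hitchins-positive patients as the reference denominator, these sensitivities are consistent with the following reconstruction (ours; the numerators are not stated explicitly in the text):

\[
\frac{21}{28} = 75\%, \qquad \frac{15}{28} \approx 53.6\%, \qquad \frac{20}{28} \approx 71.4\%
\]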
 
The management of SRS is challenging and requires multidisciplinary input. Growth hormone (GH) treatment is currently recommended for children born small for gestational age without spontaneous catch-up growth and for those with GH deficiency. In SRS, abnormalities in spontaneous GH secretion and subnormal responses to provocative GH stimulation have been well documented.20 The proposed mechanism is dysregulation of the growth factors and their major binding protein,4 particularly in the H19 LOM group. In addition, SRS patients are expected to have poor catch-up growth. Nonetheless, GH therapy is not a universal standard treatment for SRS. In Hong Kong, the indications for GH therapy under Hospital Authority guidelines22 do not include SRS without GH response abnormalities. In our cohort, only three patients, all with a suboptimal GH provocative stimulation test, are currently receiving GH treatment. The long-term outcome is not yet known.
 
Although tissue-specific epigenetic manifestation has been reported in SRS,23 mosaic genetic or epigenetic alteration is uncommon.24 We have one patient with mUPD11 confirmed by molecular testing of both peripheral blood and buccal swab samples. Mosaicism should be considered when a patient has a typical SRS phenotype but negative routine testing; testing of other tissues should then be pursued to provide an accurate molecular diagnosis that can guide subsequent genetic counselling and clinical management.
 
Finally, it is well known from the literature that gain of function of the CDKN1C gene25 and maternal UPD14 (Temple syndrome)26 27 can result in a phenotype mimicking SRS. Other syndromic growth retardation disorders, such as mulibrey nanism and 3-M syndrome, also share many clinical features with SRS.28 29 Therefore, drawing on the latest understanding of the molecular pathogenetic mechanisms of SRS, together with published evidence21 30 31 and the results of this study, we propose a diagnostic algorithm for Chinese SRS patients, as depicted in Figure 2. All clinically suspected SRS patients should be assessed by a clinical geneticist. Although the Netchine et al score,7 Bartholdi et al score,12 and Birmingham score14 are highly specific, they are less sensitive than Hitchins et al’s criteria15 for diagnosing SRS in our Chinese cohort. Hitchins et al’s criteria15 should therefore be used clinically to classify suspected patients as ‘likely’ or ‘unlikely’ SRS. For ‘likely SRS’ patients, sequential 11p15 region methylation study and mUPD7 analysis should be performed, because epigenetic alteration of the 11p15 region is more prevalent than mUPD7 in SRS. For molecularly unconfirmed SRS, further testing for other SRS-like syndromes, including Temple syndrome and CDKN1C-related disorder, should be pursued if indicated.
 

Figure 2. Proposed algorithm for management and genetic investigations for suspected SRS in Hong Kong
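 
As an illustration only, the sequential decision logic of the proposed algorithm can be sketched in a few lines of Python. The boolean inputs are stand-ins for the clinical assessment and laboratory results described in the text; this is not the authors' implementation.

def srs_workup(meets_hitchins_criteria, h19_lom_positive, mupd7_positive):
    """Sketch of the sequential work-up proposed in Figure 2."""
    if not meets_hitchins_criteria:
        return "unlikely SRS: reassess and consider other growth disorders"
    # 11p15 methylation study is performed first because 11p15 epigenetic
    # alteration is more prevalent than mUPD7 in SRS
    if h19_lom_positive:
        return "SRS confirmed: 11p15 H19 loss of methylation"
    if mupd7_positive:
        return "SRS confirmed: maternal UPD7"
    return ("molecularly unconfirmed SRS: consider mosaicism (test other "
            "tissues) and SRS-like syndromes, eg Temple syndrome or "
            "CDKN1C-related disorder")

print(srs_workup(True, False, True))  # -> SRS confirmed: maternal UPD7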
 
Conclusion
This 5-year review is the first territory-wide study of Chinese SRS patients in Hong Kong. It showed that the clinical features and underlying epigenetic mechanisms of Chinese SRS are similar to those reported in western populations. Early diagnosis and multidisciplinary management are important for SRS patients, and vigilant clinical suspicion with confirmation by molecular testing is essential. Based on the current evidence and the performance evaluation of different clinical scoring systems, a comprehensive diagnostic algorithm is proposed. We hope that a growing understanding of the underlying pathophysiology and of the (epi)genotype-phenotype correlation in Chinese SRS patients will greatly improve the quality of medical care in the near future.
 
Acknowledgements
We thank all the paediatricians and physicians who referred their SRS patients to our service and to QMH. We are also grateful to all the laboratory staff in CGS for their technical support. This work at HKU was supported by HKU Small Project Funding and The Society for the Relief of Disabled Children in Hong Kong.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Silver HK, Kiyasu W, George J, Deamer WC. Syndrome of congenital hemihypertrophy, shortness of stature, and elevated urinary gonadotropins. Pediatrics 1953;12:368-76.
2. Russell A. A syndrome of intra-uterine dwarfism recognizable at birth with cranio-facial dysostosis, disproportionately short arms, and other anomalies (5 examples). Proc R Soc Med 1954;47:1040-4.
3. Wollmann HA, Kirchner T, Enders H, Preece MA, Ranke MB. Growth and symptoms in Silver-Russell syndrome: review on the basis of 386 patients. Eur J Pediatr 1995;154:958-68. Crossref
4. Wakeling EL, Amero SA, Alders M, et al. Epigenotype-phenotype correlations in Silver-Russell syndrome. J Med Genet 2010;47:760-8. Crossref
5. Christoforidis A, Maniadaki I, Stanhope R. Managing children with Russell-Silver syndrome: more than just growth hormone treatment? J Pediatr Endocrinol Metab 2005;18:651-2. Crossref
6. Kotzot D, Schmitt S, Bernasconi F, et al. Uniparental disomy 7 in Silver-Russell syndrome and primordial growth retardation. Hum Mol Genet 1995;4:583-7. Crossref
7. Netchine I, Rossignol S, Dufourg MN, et al. 11p15 Imprinting center region 1 loss of methylation is a common and specific cause of typical Russell-Silver syndrome: clinical scoring system and epigenetic-phenotypic correlations. J Clin Endocrinol Metab 2007;92:3148-54. Crossref
8. Gicquel C, Rossignol S, Cabrol S, et al. Epimutation of the telomeric imprinting center region on chromosome 11p15 in Silver-Russell syndrome. Nat Genet 2005;37:1003-7. Crossref
9. Schönherr N, Meyer E, Eggermann K, Ranke MB, Wollmann HA, Eggermann T. (Epi)mutations in 11p15 significantly contribute to Silver-Russell syndrome: but are they generally involved in growth retardation? Eur J Med Genet 2006;49:414-8. Crossref
10. Azzi S, Abi Habib W, Netchine I. Beckwith-Wiedemann and Russell-Silver Syndromes: from new molecular insights to the comprehension of imprinting regulation. Curr Opin Endocrinol Diabetes Obes 2014;21:30-8. Crossref
11. Price SM, Stanhope R, Garrett C, Preece MA, Trembath RC. The spectrum of Silver-Russell syndrome: a clinical and molecular genetic study and new diagnostic criteria. J Med Genet 1999;36:837-42.
12. Bartholdi D, Krajewska-Walasek M, Ounap K, et al. Epigenetic mutations of the imprinted IGF2-H19 domain in Silver-Russell syndrome (SRS): results from a large cohort of patients with SRS and SRS-like phenotypes. J Med Genet 2009;46:192-7. Crossref
13. Eggermann T, Gonzalez D, Spengler S, Arslan-Kirchner M, Binder G, Schönherr N. Broad clinical spectrum in Silver-Russell syndrome and consequences for genetic testing in growth retardation. Pediatrics 2009;123:e929-31. Crossref
14. Dias RP, Nightingale P, Hardy C, et al. Comparison of the clinical scoring systems in Silver-Russell syndrome and development of modified diagnostic criteria to guide molecular genetic testing. J Med Genet 2013;50:635-9. Crossref
15. Hitchins MP, Stanier P, Preece MA, Moore GE. Silver-Russell syndrome: a dissection of the genetic aetiology and candidate chromosomal regions. J Med Genet 2001;38:810-9. Crossref
16. Fuke T, Mizuno S, Nagai T, et al. Molecular and clinical studies in 138 Japanese patients with Silver-Russell syndrome. PLoS One 2013;8:e60105. Crossref
17. Bliek J, Terhal P, van den Bogaard MJ, et al. Hypomethylation of the H19 gene causes not only Silver-Russell syndrome (SRS) but also isolated asymmetry or an SRS-like phenotype. Am J Hum Genet 2006;78:604-14. Crossref
18. Bruce S, Hannula-Jouppi K, Peltonen J, Kere J, Lipsanen-Nyman M. Clinically distinct epigenetic subgroups in Silver-Russell syndrome: the degree of H19 hypomethylation associates with phenotype severity and genital and skeletal anomalies. J Clin Endocrinol Metab 2009;94:579-87. Crossref
19. Kotzot D. Maternal uniparental disomy 7 and Silver-Russell syndrome—clinical update and comparison with other subgroups. Eur J Med Genet 2008;51:444-51. Crossref
20. Binder G, Seidel AK, Martin DD, et al. The endocrine phenotype in Silver-Russell syndrome is defined by the underlying epigenetic alteration. J Clin Endocrinol Metab 2008;93:1402-7. Crossref
21. Azzi S, Salem J, Thibaud N, et al. A prospective study validating a clinical scoring system and demonstrating phenotypical-genotypical correlations in Silver-Russell syndrome. J Med Genet 2015;52:446-53. Crossref
22. But WM, Huen KF, Lee CY, Lam YY, Tse WY, Yu CM. An update on the indications of growth hormone treatment under Hospital Authority in Hong Kong. Hong Kong J Paediatr 2012;17:208-16.
23. Azzi S, Blaise A, Steunou V, et al. Complex tissue-specific epigenotypes in Russell-Silver Syndrome associated with 11p15 ICR1 hypomethylation. Hum Mutat 2014;35:1211-20. Crossref
24. Bullman H, Lever M, Robinson DO, Mackay DJ, Holder SE, Wakeling EL. Mosaic maternal uniparental disomy of chromosome 11 in a patient with Silver-Russell syndrome. J Med Genet 2008;45:396-9. Crossref
25. Brioude F, Oliver-Petit I, Blaise A, et al. CDKN1C mutation affecting the PCNA-binding domain as a cause of familial Russell Silver syndrome. J Med Genet 2013;50:823-30. Crossref
26. Ioannides Y, Lokulo-Sodipe K, Mackay DJ, Davies JH, Temple IK. Temple syndrome: improving the recognition of an underdiagnosed chromosome 14 imprinting disorder: an analysis of 51 published cases. J Med Genet 2014;51:495-501. Crossref
27. Kagami M, Mizuno S, Matsubara K, et al. Epimutations of the IG-DMR and the MEG3-DMR at the 14q32.2 imprinted region in two patients with Silver-Russell Syndrome–compatible phenotype. Eur J Hum Genet 2015;23:1062-7. Crossref
28. Hämäläinen RH, Mowat D, Gabbett MT, O’Brien TA, Kallijärvi J, Lehesjoki AE. Wilms’ tumor and novel TRIM37 mutations in an Australian patient with mulibrey nanism. Clin Genet 2006;70:473-9. Crossref
29. van der Wal G, Otten BJ, Brunner HG, van der Burgt I. 3-M syndrome: description of six new patients with review of the literature. Clin Dysmorphol 2001;10:241-52. Crossref
30. Scott RH, Douglas J, Baskcomb L, et al. Methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA) robustly detects and distinguishes 11p15 abnormalities associated with overgrowth and growth retardation. J Med Genet 2008;45:106-13. Crossref
31. Spengler S, Begemann M, Ortiz Brüchle N, et al. Molecular karyotyping as a relevant diagnostic tool in children with growth retardation with Silver-Russell features. J Pediatr 2012;161:933-42. Crossref

Management of health care workers following occupational exposure to hepatitis B, hepatitis C, and human immunodeficiency virus

Hong Kong Med J 2016 Oct;22(5):472–7 | Epub 26 Aug 2016
DOI: 10.12809/hkmj164897
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE  CME
Management of health care workers following occupational exposure to hepatitis B, hepatitis C, and human immunodeficiency virus
Winnie WY Sin, MB, ChB, FHKAM (Medicine); Ada WC Lin, MB, BS, FHKAM (Medicine); Kenny CW Chan, MB, BS, FHKAM (Medicine); KH Wong, MB, BS, FHKAM (Medicine)
Special Preventive Programme, Centre for Health Protection, Department of Health, Kowloon Bay Health Centre, Hong Kong
 
Corresponding author: Dr Kenny CW Chan (kcwchan@dh.gov.hk)
 
 Full paper in PDF
 
Abstract
Introduction: Needlestick injury or mucosal contact with blood or body fluids is well recognised in the health care setting. This study aimed to describe the post-exposure management and outcome in health care workers following exposure to hepatitis B, hepatitis C, or human immunodeficiency virus (HIV) during needlestick injury or mucosal contact.
 
Methods: This case series study was conducted in a public clinic in Hong Kong. All health care workers with a needlestick injury or mucosal contact with blood or body fluids who were referred to the Therapeutic Prevention Clinic of Department of Health from 1999 to 2013 were included.
 
Results: A total of 1525 health care workers were referred to the Therapeutic Prevention Clinic following occupational exposure. Most sustained a percutaneous injury (89%), in particular during post-procedure cleaning or tidying up. Gloves were worn in 62.7% of instances. The source patient could be identified in 83.7% of cases, but the infection status was usually unknown; the baseline positivity rates of hepatitis B, hepatitis C, and HIV among all identified sources, as reported by the injured, were 7.4%, 1.6%, and 3.3%, respectively. Post-exposure prophylaxis for HIV was prescribed to 48 health care workers; of the 36 who continued prophylaxis after risk assessment, 14 (38.9%) had been exposed to known HIV-infected blood or body fluids. The majority (89.6%) received HIV post-exposure prophylaxis within 24 hours of exposure. Drug-related adverse events were encountered by 88.6%. The completion rate of post-exposure prophylaxis was 73.1%. After a follow-up period of 6 months (or 1 year for those who had taken HIV post-exposure prophylaxis), no hepatitis B, hepatitis C, or HIV seroconversions were detected.
 
Conclusions: Percutaneous injury in the health care setting is not uncommon, but post-exposure prophylaxis of HIV is infrequently indicated. There was no hepatitis B, hepatitis C, or HIV transmission via sharps or mucosal injury in this cohort of health care workers.
 
 
New knowledge added by this study
  • The risk of hepatitis B (HBV), hepatitis C (HCV), and human immunodeficiency virus (HIV) transmission following occupational sharps or mucosal injury in Hong Kong is small.
Implications for clinical practice or policy
  • Meticulous adherence to infection control procedures and timely post-exposure management prevents HBV, HCV, and HIV infection following occupational exposure to blood and body fluids.
 
 
Introduction
Needlestick injury or mucosal contact with blood or body fluids is well recognised in the health care setting. These incidents pose a small but definite risk for health care workers of acquiring blood-borne viruses, notably hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV). The estimated risk of contracting HBV infection through occupational exposure to known infected blood via needlestick injury varies from 18% to 30%, while that for HCV infection is 1.8% (range, 0%-7%).1 The risk of HIV transmission following percutaneous or mucosal exposure to HIV-contaminated blood is 0.3% and 0.09%, respectively.1 The risk is further affected by the type of exposure, body fluid involved, and infectivity of the source.
 
In Hong Kong, injured health care workers usually receive initial first aid and immediate management in the Accident and Emergency Department. They are then referred to designated clinics for specific post-exposure management. Currently, aside from staff of the Hospital Authority who are managed at two designated clinics post-exposure, all other health care workers from private hospitals, and government or private clinics and laboratories are referred to the Therapeutic Prevention Clinic (TPC) of the Integrated Treatment Centre, Department of Health. Since its launch in mid-1999, the TPC has provided comprehensive post-exposure management to people with documented percutaneous, mucosal, or breached skin exposure to blood or body fluids in accordance with the local guidelines set out by the Scientific Committee on AIDS and STI, and Infection Control Branch of Centre for Health Protection, Department of Health.2 The present study describes the characteristics and outcome of health care workers who attended the TPC from mid-1999 to 2013 following occupational exposure to blood or body fluids.
 
Methods
The study included all health care workers seen in the TPC from July 1999 to December 2013 following occupational exposure to blood or body fluids, who attended after secondary referral by an accident and emergency department of a public hospital. Using two standard questionnaires (Appendices 1 and 2), the attending nurse and doctor collected data during a face-to-face interview with each health care worker on the following: demography and occupation of the exposed client, type and pattern of exposure, post-exposure management, and clinical outcome.
 
Appendix 1. TPC First Consultation Assessment Form
 
Appendix 2. Therapeutic Prevention Clinic (TPC) human immunodeficiency virus (HIV) Post-exposure Prophylaxis Registry Form (to be completed on completion or cessation of post-exposure prophylaxis)
 
Details of the exposure, including type of exposure and the situation in which it occurred, were noted. The number of risk factors (see definitions below) for HIV transmission was counted for each exposure and further classified as high risk or low risk. Where known and reported by the injured party, hepatitis B surface antigen (HBsAg), HCV, and HIV status of the source were recorded.
 
The timing of the first medical consultation in the accident and emergency department, any prescription of HIV post-exposure prophylaxis (PEP), and the time since injury were noted. Exposed health care workers who received HIV PEP were reviewed at clinic visits every 2 weeks until completion of the 4-week course of treatment, and any treatment-related adverse effects were reported. Blood was obtained as appropriate at these visits for measurement of complete blood count, renal and liver function, and amylase, creatine kinase, fasting lipid, and glucose levels.
 
In addition to HIV PEP–related side-effects (reported and rated by patients as mild, moderate, or severe), the rate of completion of PEP and the number of HBV, HCV, and HIV seroconversions following the incident were recorded. The HBsAg, anti-HBs, anti-HCV, and anti-HIV were checked at baseline and 6 months post-exposure to determine whether seroconversion had occurred. Those exposed to a known HCV-infected source or a source known to be an injecting drug user had additional blood tests 6 weeks post-exposure for liver function, anti-HCV, and HCV RNA. Additional HIV antibody testing at 3 and 12 months post-exposure was arranged for those who received HIV PEP. For those who contracted HCV infection from a source co-infected with HCV and HIV, further HIV testing was performed at 1 year post-exposure to detect delayed seroconversion.
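 
The testing protocol above can be summarised as a simple schedule generator. The following Python sketch is an illustration under stated assumptions (weeks are approximate conversions of the stated intervals); it is not the clinic's actual system.

def followup_schedule(took_hiv_pep, hcv_risk_source):
    """Return (weeks post-exposure, tests) pairs for an exposure profile."""
    schedule = [
        (0, ["HBsAg", "anti-HBs", "anti-HCV", "anti-HIV"]),   # baseline
        (26, ["HBsAg", "anti-HBs", "anti-HCV", "anti-HIV"]),  # 6 months
    ]
    if hcv_risk_source:  # known HCV-infected source or injecting drug user
        schedule.append((6, ["liver function", "anti-HCV", "HCV RNA"]))
    if took_hiv_pep:     # extra HIV antibody tests at 3 and 12 months
        schedule.extend([(13, ["anti-HIV"]), (52, ["anti-HIV"])])
    return sorted(schedule)

for week, tests in followup_schedule(took_hiv_pep=True, hcv_risk_source=False):
    print(f"week {week:>2}: {', '.join(tests)}")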
 
Definitions
Health care workers included doctors and medical students, dentists and dental workers, nurses, midwives, inoculators, laboratory workers, phlebotomists, ward or clinic attendants, and workmen. Staff working in non–health care institutions (eg elderly homes, hostels, and sheltered workshops) were excluded. Five factors were classified as high-risk exposure: (i) deep percutaneous injury, (ii) a procedure involving a device placed in a blood vessel, (iii) use of a hollow-bore needle, (iv) a device visibly contaminated with blood, and (v) a source person with acquired immunodeficiency syndrome (AIDS).3 Another five factors were classified as low-risk exposure: (i) moderate percutaneous injury, (ii) mucosal contact, (iii) contact with deep body fluids other than blood, (iv) a source person known to be HIV-infected but without AIDS or with unknown AIDS status, and (v) any other factor judged on clinical grounds to increase risk.
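 
As described above, the number of risk factors was counted for each exposure. The following Python sketch shows one way such a count could be implemented; the data structure is a hypothetical illustration, not the study's instrument.

HIGH_RISK_FACTORS = {
    "deep percutaneous injury",
    "device placed in a blood vessel",
    "hollow-bore needle",
    "device visibly contaminated with blood",
    "source person with AIDS",
}

def count_high_risk(exposure_features):
    """Count how many of the five high-risk factors an exposure carries."""
    return len(HIGH_RISK_FACTORS & set(exposure_features))

# Hypothetical example: a needlestick sustained during blood taking
features = ["hollow-bore needle", "device visibly contaminated with blood"]
print(count_high_risk(features))  # -> 2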
 
Results
From July 1999 to December 2013, 1525 health care workers (75-168 per year) with occupational exposure to HBV, HCV, or HIV were referred to the TPC (Fig). Females constituted 77% of all attendees. The median age was 33 years (range, 17-73 years). The majority came from the dental profession (36.8%) and nursing profession (33.4%), followed by ward/clinic ancillary staff (11.6%) and the medical profession (4.7%).
 

Figure. Referrals of health care workers with occupational exposure to Therapeutic Prevention Clinic and the post-exposure prophylaxis (PEP) prescription
 
Type and pattern of exposure
The majority of exposures occurred in a public clinic or laboratory (n=519, 34.0%), followed by public hospital (n=432, 28.3%), private clinic or laboratory (n=185, 12.1%), and private hospital (n=23, 1.5%). Most were percutaneous injuries (88.9%); mucosal contact, breached skin contact, and human bite were infrequent (Table 1). Approximately 60% of the incidents occurred in one of four situations: (a) cleaning/tidying up after procedures (the most common), (b) other bedside/treatment room procedures, (c) injection, including recapping of needles, or (d) blood taking/intravenous catheter insertion. The contact specimen was blood or blood products, blood-contaminated fluid, and saliva or urine in 30.6%, 5.8%, and 14.1% of cases, respectively. The technical device involved was a hollow-bore needle in 48.1%, a dental instrument in 20.7%, and a lancet in 7.7%. More than 80% considered the injury superficial.
 

Table 1. Details of occupational exposure in health care workers
 
High-risk and low-risk factors were noted in 869 (57%) and 166 (11%) exposures, respectively. Blood taking/intravenous catheter insertion carried the highest risk among all the procedures, with a mean of 1.29 risk factors per exposure (Table 2). Gloves were used in 956 (62.7%) exposures, goggles/mask in 50 (3.3%), and gown/apron in 55 (3.6%). Nonetheless, 101 (6.6%) health care workers indicated that they did not use any personal protective equipment at the time of exposure.
 

Table 2. Risk factors in health care workers with higher-risk occupational exposure during various activities/procedures from 1999 to 2013
 
The source patient could be identified in 1277 (83.7%) cases but the infectious status was unknown in most. The baseline known positivity rate for HBV, HCV, and HIV of all identified sources was 7.4%, 1.6%, and 3.3%, respectively (Table 1).
 
Care and clinical outcome
Nearly half of the injured health care workers attended a medical consultation within 2 hours (n=720, 47.2%) and another 552 (36.2%) attended between 2 and 12 hours following exposure. The median time between exposure and medical consultation was 2.0 hours.
 
During the study period, 48 (3.1%) health care workers received HIV PEP for occupational exposure, ranging from zero to eight per year (Fig). One third received PEP within 2 hours of exposure, and the majority (89.6%) within 24 hours. The median time to PEP was 4.0 hours post-exposure (interquartile range, 2.0-8.1 hours). A three-drug regimen was prescribed in 85.7% of cases. The most common regimen was zidovudine/lamivudine/indinavir (39.6%), followed by zidovudine/lamivudine/ritonavir-boosted lopinavir (31.3%), and zidovudine/lamivudine (12.5%) [Table 3]. Upon consultation and risk assessment at the TPC, 36 (75%) workers had treatment continued from the accident and emergency department; among them, the source was confirmed to be HIV-positive in 14 (38.9%) cases. Of the 35 clients with known outcome, drug-related adverse events were seen in 31 (88.6%) health care workers, and more than half of these (n=18, 58.1%) were considered moderate or severe. Treatment-related side-effects led to early termination of PEP in eight (22.9%) health care workers. Excluding nine clients in whom prophylaxis was stopped when the source was established to be HIV-negative, 19 of 26 (73.1%) clients completed the 28-day course of PEP. Of the 14 clients who sustained injury from an HIV-infected source patient, all received PEP but two did not complete the course; the completion rate was 85.7%.
 

Table 3. Post-exposure prophylaxis regimens of human immunodeficiency virus
 
At baseline, none of the injured health care workers tested positive for HCV or HIV, while 49 (3.2% of all health care workers seen in TPC) tested HBsAg-positive. Almost half of the health care workers (n=732, 48.0%) were immune to HBV (anti-HBs positive). After follow-up of 6 months (1 year for those who took PEP), no case of HBV, HCV, or HIV seroconversion was detected in this cohort.
 
Discussion
Health care workers may be exposed to blood-borne viruses when they handle sharps and body fluids. Adherence to standard precautions for infection control is therefore an integral component of occupational health and safety for health care workers. In this cohort, percutaneous injury with sharps during cleaning or tidying up after procedures remained the most common mechanism of injury. Many of these incidents could have been prevented by safer practice, for instance by not recapping needles or by disposing of needles directly into a sharps box after use. The use of gloves as part of standard precautions was suboptimal, and greater emphasis should be placed on the importance of wearing appropriate personal protective equipment during staff induction training and refresher courses. Devices with safety-engineered or needleless features may reduce sharps injuries. Improvements to the system (eg placing a sharps box near the work area) or to the workflow to minimise distraction may also improve compliance with infection control measures.
 
Once exposure occurs, PEP is the last defence against HBV and HIV. For HBV infection, PEP with hepatitis B immunoglobulin followed by hepatitis B vaccination has long been standard practice in Hong Kong. For HIV infection, the efficacy of PEP in health care workers following occupational exposure was demonstrated by a landmark overseas case-control study3: prescription of zidovudine achieved an 81% reduction in the risk of HIV seroconversion following percutaneous exposure to HIV-infected blood.3 Local and international guidelines now recommend a combination of three antiretroviral drugs for PEP.2 4 5 6 In this cohort, although more than half of the exposures had higher-risk factors for HIV acquisition, it was uncommon for the source patient to have known HIV infection (2.8% of these exposures); thus, in accordance with the local guideline, PEP was not commonly prescribed. Nevertheless, PEP was prescribed in all 14 exposures to a known HIV-positive source and in another 34 exposures after risk assessment. Our experience is comparable with that of health care services in the UK and US: in the UK, 78% of health care workers exposed to an HIV-infected source patient were prescribed PEP,7 and in a report from the US, only 68% of health care workers with such exposure took PEP.8 For HCV, PEP with antiviral therapy is not recommended according to the latest guidelines from the American Association for the Study of Liver Diseases/Infectious Diseases Society of America.9 If seroconversion occurs and early treatment is considered desirable, acute hepatitis C can be treated with direct-acting antivirals using the same regimen recommended for chronic hepatitis C.
 
If indicated, HIV PEP should be taken as early as possible after exposure to achieve maximal effect; initiation of PEP more than 72 hours after exposure was shown to be ineffective in animal studies.10 The timing of PEP initiation in our cohort appeared less prompt (33.3% within 2 hours, compared with more than 60% and 80% within 3 hours in the UK and US, respectively). Overall, however, 89.6% managed to start PEP within 24 hours, in line with experience in the UK and US. Health care workers should be reminded about post-exposure management and the need for timely medical assessment following occupational exposure. In the accident and emergency department, priority assessment should be given to health care workers exposed to blood-borne viruses. The median duration of PEP intake (28 days) was in line with the local guidelines. With the availability of newer drugs with fewer toxicities, tolerance and compliance rates should improve.
 
Finally, using the estimated risk of HIV transmission with percutaneous injury of 0.3%, we would expect four HIV seroconversions among the 1356 percutaneous exposures in the TPC if all had involved HIV-infected blood. Because the source HIV status was unknown in most of these exposures, and likely negative in this region of overall low HIV prevalence (approximately 0.1%11), the actual risk of HIV transmission in the health care setting of Hong Kong was much lower. This is consistent with the absence of HIV seroconversion in this cohort; in addition, those with the highest-risk exposures received HIV PEP. In the UK, there were 4381 significant occupational exposures from 2002 to 2011, of which 1336 were exposures to HIV-infected blood or body fluid; no HIV seroconversions occurred among these exposures.7 In the US, there has been one confirmed case of occupational transmission of HIV to a health care worker since 1999.12 Similarly, the local prevalence of HCV infection is low (<0.1% in new blood donors13), partly explaining the absence of HCV transmission in this cohort. In contrast, 20 cases of HCV seroconversion in health care workers were reported between 1997 and 2011 in the UK.7 Hepatitis B is considered endemic in Hong Kong, with HBsAg positivity of 1.1% in new blood donors and 6.5% in antenatal women in 2013.13 Nonetheless, the HBV vaccination programme for health care workers, coupled with HBV PEP, has proven successful in preventing HBV transmission to health care workers. With concerted efforts in infection control and timely PEP, transmission of blood-borne viruses via sharps and mucosal injury in the health care setting is largely preventable.
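 
The expected-case arithmetic above is a simple binomial mean; the short Python sketch below makes it explicit. The 0.3% per-exposure risk is from the cited literature, and the assumption that every source was HIV-infected is the deliberately pessimistic upper bound used in the paragraph.

# Expected seroconversions = n * p (the mean of a binomial distribution),
# assuming (pessimistically) that all 1356 sources were HIV-infected
percutaneous_exposures = 1356
risk_per_exposure = 0.003  # 0.3% per percutaneous exposure to HIV-infected blood

expected = percutaneous_exposures * risk_per_exposure
print(round(expected))  # -> 4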
 
There are several limitations to our study. First, data were collected from a single centre and based on secondary referral. We did not have data for other health care workers who had occupational exposure but who were not referred to the TPC for post-exposure management, or who were referred but did not attend. Thus, we were not able to draw any general conclusions on the true magnitude of the problem. Second, details of the exposure and the infection status of the source were self-reported by the exposed client and prone to bias and under-reporting.
 
Conclusions
Percutaneous injury with sharps during cleaning or tidying up after procedures was the most common cause of occupational exposure to blood or body fluids in this cohort of health care workers. The majority of source patients were not confirmed HIV-positive, and HIV PEP was not generally indicated. Prescriptions of HIV PEP were appropriate and timely in most cases. There were no HIV, HBV, or HCV seroconversions in health care workers who attended the TPC following sharps or mucosal injury from mid-1999 to 2013.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Prüss-Üstün A, Rapiti E, Hutin Y. Sharps injuries: global burden of disease from sharps injuries to health-care workers (World Health Organization Environmental Burden of Disease Series, No. 3). Available from: http://www.who.int/quantifying_ehimpacts/publications/en/sharps.pdf?ua=1. Accessed 2 Feb 2016.
2. Scientific Committee on AIDS and STI (SCAS), and Infection Control Branch, Centre for Health Protection, Department of Health. Recommendations on the management and postexposure prophylaxis of needlestick injury or mucosal contact to HBV, HCV and HIV. Hong Kong: Department of Health; 2014.
3. Cardo DM, Culver DH, Ciesielski CA, et al. A case-control study of HIV seroconversion in health care workers after percutaneous exposure. Centers for Disease Control and Prevention Needlestick Surveillance Group. N Engl J Med 1997;337:1485-90. Crossref
4. Kuhar DT, Henderson DK, Struble KA, et al. Updated US Public Health Service guidelines for the management of occupational exposures to human immunodeficiency virus and recommendations for postexposure prophylaxis. Infect Control Hosp Epidemiol 2013;34:875-92. Crossref
5. UK Department of Health. HIV post-exposure prophylaxis: guidance from the UK Chief Medical Officers’ Expert Advisory Group on AIDS. 19 September 2008 (last updated 29 April 2015).
6. WHO Guidelines Approved by the Guidelines Review Committee. Guidelines on Post-Exposure Prophylaxis for HIV and the Use of Co-Trimoxazole Prophylaxis for HIV-Related Infections Among Adults, Adolescents and Children: Recommendations for a Public Health Approach: December 2014 supplement to the 2013 consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection. Geneva: World Health Organization; December 2014.
7. Eye of the Needle. United Kingdom surveillance of significant occupational exposures to bloodborne viruses in healthcare workers. London: Health Protection Agency; December 2012.
8. US Department of Health and Human Services, Centers for Disease Control and Prevention. The National Surveillance System for Healthcare Workers (NaSH): Summary report for blood and body fluid exposure data collected from participating healthcare facilities (June 1995 through December 2007).
9. American Association for the Study of Liver Diseases/Infectious Diseases Society of America. HCV guidance: recommendations for testing, managing, and treating hepatitis C (updated 24 February 2016). Available from: http://www.hcvguidelines.org. Accessed 5 May 2016.
10. Tsai CC, Emau P, Follis KE, et al. Effectiveness of postinoculation (R)-9-(2-phosphonylmethoxypropyl) adenine treatment for prevention of persistent simian immunodeficiency virus SIVmne infection depends critically on timing of initiation and duration of treatment. J Virol 1998;72:4265-73.
11. HIV surveillance report—2014 update. Department of Health, The Government of the Hong Kong Special Administrative Region; December 2015.
12. Joyce MP, Kuhar D, Brooks JT. Occupationally acquired HIV infection among health care workers—United States, 1985-2013. MMWR Morb Mortal Wkly Rep 2015;63:1245-6.
13. Surveillance of viral hepatitis in Hong Kong—2014 update. Department of Health, The Government of the Hong Kong Special Administrative Region; December 2015.

Violence against emergency department employees and the attitude of employees towards violence

Hong Kong Med J 2016 Oct;22(5):464–71 | Epub 26 Aug 2016
DOI: 10.12809/hkmj154714
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Violence against emergency department employees and the attitude of employees towards violence
Halil İ Çıkrıklar, MD1; Yusuf Yürümez, MD1; Buket Güngör, MD2; Rüstem Aşkın, MD2; Murat Yücel, MD1; Canan Baydemir, MD3
1 Department of Emergency Medicine, Sakarya University, Medical Faculty, Sakarya, Turkey
2 Psychiatry Clinic, Ministry of Health, Şevket Yilmaz Training and Research Hospital, Bursa, Turkey
3 Department of Biostatistics, Eskişehir Osmangazi University, Medical Faculty, Eskişehir, Turkey
 
Corresponding author: Dr Halil İ Çıkrıklar (halilcikriklar@hotmail.com)
 
 Full paper in PDF
 
Abstract
Introduction: This study was conducted to evaluate the occurrence of violent incidents in the workplace among the various professional groups working in the emergency department. We characterised the types of violence encountered by different occupation groups and the attitude of individuals working in different capacities.
 
Methods: This cross-sectional study included 323 people representing various professional groups working in two distinct emergency departments in Turkey. The participants were asked to complete questionnaires prepared in advance by the researchers. The data were analysed using the Statistical Package for the Social Sciences (Windows version 15.0).
 
Results: A total of 323 subjects, including 189 (58.5%) men and 134 (41.5%) women, participated in the study. The mean (± standard deviation) age was 31.5 ± 6.5 years for men and 32.0 ± 6.9 years for women. In all, 74.0% of participants had been subjected to verbal or physical violence at some point since starting employment in a medical profession, and 50.2% stated that they had been subjected to violence more than five times. Among those who reported being subjected to violence, 42.7% had formally reported the incident(s). Moreover, 74.3% of participants no longer enjoyed their profession, did not want to work in the emergency department, or would prefer employment in a non–health care field after being subjected to violence. According to the study participants, the most common cause of violence was the ignorance and lack of education of patients and their family members (28.7%). In addition, 79.6% (n=257) of participants stated that they did not have adequate safety protection in their working area. Participants considered that legal regulations to effectively deter violence, together with increased safety measures, are needed to reduce the incidence of violence in the emergency department.
 
Conclusion: Violence against employees in the emergency department is a widespread problem with a strong negative effect on employee satisfaction and work performance. To reduce the incidence of violence in the emergency department, patients and their families should be better informed so that they have realistic expectations of emergency care, deterrent legal regulations should be put in place, and greater efforts should be made to provide enhanced security for emergency department personnel. These measures would reduce workplace violence and the stress experienced by emergency workers, and we expect this to have a positive impact on emergency health care service delivery.
 
 
New knowledge added by this study
  • The prevalence of violence against employees in emergency departments is high.
Implications for clinical practice or policy
  • Various measures can be implemented to reduce the incidence of violence in the emergency department.
 
 
Introduction
Violence, which has been present throughout the history of humanity, is defined as the threatened or actual use of power or strength against another person, oneself, a group, or a community in order to cause injury and/or loss.1 The World Health Organization defines violence as “physical assault, homicide, verbal assault, emotional, sexual or racial harassment”.2
 
Workplace violence is defined as “abuse or attacks by one or more people on an employee within the workplace”.3 The health care field, which encompasses a wide range of employees, is among those in which workplace violence is common.4 Violence in the health care field is defined as “risk to a health worker due to threatening behaviour, verbal threats, physical assault and sexual assault committed by patients, patient relatives, or any other person”.3
 
According to the 2002 Workplace Violence in the Health Sector report, 25% of all violent incidents occurred in the health care sector.5 A study conducted in the United States determined that the risk of being subjected to violence is 16 times higher in the health care sector relative to other service sectors.6 Within the health care field, the department that is most frequently exposed to violence is the emergency department (ED).3 7 8 9 In this context, verbal and physical attacks by dissatisfied patients and their relatives are at the forefront.10 11
 
In this study we aimed to determine the extent of violence towards ED employees, analyse the attitude of the staff exposed to violence, and propose possible solutions.
 
Methods
This cross-sectional study was conducted in the EDs of Şevket Yilmaz Training and Research Hospital and Sakarya University between 1 July and 15 August 2012. ED employees, including doctors, nurses, health care officials, Emergency Medical Technicians (EMTs), secretaries, laboratory technicians, radiology technicians, and security and cleaning staff, were included in the study. The questionnaire was prepared in accordance with previous publications3 10 11 and distributed to participants. All study participants were provided with information about the objectives of the study and given instructions for completing the form. Of the 437 ED employees working in the two hospitals, 323 (73.9%) agreed to participate in the study and returned a completed questionnaire.
 
In addition to demographic information, the questionnaire contained questions about the number of violent incidents to which the individual had been subjected, the type of violence, and whether the subject reported the incident or, if not, the reason for not reporting. Additional questions concerned a description of the person(s) responsible for the violence, the estimated age of the person(s) responsible, and the severity of the violence. We also asked participants about their attitude following the violent incident and their suggestions for reducing violence in the ED.
 
This study was conducted in accordance with the principles of the 2008 Helsinki Declaration. The data were analysed using the Statistical Package for the Social Sciences (Windows version 15.0; SPSS Inc, Chicago [IL], US). Results are presented as proportions and mean ± standard deviation. The Student’s t test, Pearson’s Chi squared test, and the Monte Carlo Chi squared test were used to evaluate differences between groups, and a P value of <0.05 was considered statistically significant.
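 
For illustration, the following Python sketch reproduces the two kinds of comparison reported below (a t test on ages and a chi-squared test on exposure counts) using SciPy. The data here are simulated or hypothetical placeholders, not the study dataset.

import numpy as np
from scipy import stats

# Student's t test: age comparison between men and women (simulated data
# drawn from the means and SDs reported in the Results)
rng = np.random.default_rng(0)
ages_men = rng.normal(31.5, 6.5, 189)
ages_women = rng.normal(32.0, 6.9, 134)
t_stat, p_age = stats.ttest_ind(ages_men, ages_women)

# Pearson's chi-squared test: exposure to violence by sex, as a 2x2 table
# of hypothetical counts (exposed / not exposed)
table = np.array([[140, 49],   # men
                  [99, 35]])   # women
chi2, p_exposure, dof, _ = stats.chi2_contingency(table)

print(f"t test P = {p_age:.3f}; chi-squared P = {p_exposure:.3f}")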
 
Results
Among the 323 participants included in the study, 189 (58.5%) were male and 134 (41.5%) were female. The mean age of the male participants was 31.5 ± 6.5 years (range, 18-55 years) and that of the female participants was 32.0 ± 6.9 years (range, 20-52 years). There was no significant difference in the age distribution between the male and female participants (P=0.476).
 
When participants were asked if they had ever been exposed to verbal or physical violence in the workplace during the course of their career, 239 (74.0%) indicated that they had been subjected to at least one form, and 57 (17.6%) reported being subjected to both verbal and physical violence. Among the participants who had been subjected to violence, 162 (67.8%) reported being the victim of more than five violent incidents (Table 1).
 

Table 1. Frequency of exposure to violence for male and female employees
 
The frequency of exposure to violence and the frequency of exposure to more than five violent incidents were similar for both men and women (P=0.185 and 0.104, respectively). Nonetheless, 25.9% of men reported both verbal and physical violence compared with only 6.0% of women, suggesting that the incidence of verbal and physical violence against men was greater than that against women (P<0.001) [Table 1].
 
We investigated the frequency of exposure to violence and the reported incidence of violence among various occupation groups (Table 2). The prevalence of exposure to violence was the highest among health care officials, EMTs, doctors, and security staff (P<0.001). In addition, only 102 (42.7%) out of 239 participants reported these violent incidents. It is notable that although the rate of incident reporting was 100% among security staff, none of the laboratory technicians reported the violent incidents (P<0.001).
 

Table 2. The distribution of occupation groups according to frequency of exposure to violence and rate of reporting
 
A total of 43 (31.4%) out of the 137 study participants who had been exposed to violence but had not reported the incident provided reasons (Table 3). The most common reason for not notifying the authorities was the perception that “no resolution will be reached”. Other important reasons included the heavy workload, not wanting to deal with the legal process, disregarding verbal attacks, understanding/sympathising with the emotions of patients and their relatives, fear of the threat from patients and their relatives, and not knowing how and where to report such incidents.
 

Table 3. Reasons for not reporting a violent incident (n=43)
 
A total of 248 participants responded to a question regarding the identity of the person who was to blame for the violence in ED in general (not their own experiences). Accordingly, 65.3% (n=162) stated that the patient’s relatives were responsible, 27.0% (n=67) stated that both the patients and their relatives were responsible, and 5.2% (n=13) placed sole responsibility on the patients. Six (2.4%) participants stated that they had been subjected to violence from other health care professionals.
 
When we asked individuals to estimate the age of the person(s) responsible for the violence they had experienced, respondents who had been exposed to multiple violent incidents could select multiple options, and a total of 405 answers were obtained. As shown in Table 4, the majority (71.4%) of the people responsible for violent incidents were young patients and patient relatives between 18 and 39 years of age.
 

Table 4. Estimated age of violent patients/family members (n=405)
 
When participants who were exposed to violence were asked who caused the violent incident, three (1.3%) participants stated that they themselves were responsible, five (2.1%) indicated that both sides were responsible, and the remaining 231 (96.7%) held the attacker responsible.
 
Participants were asked “What do you think is the reason for the violence?”. A total of 181 (56.0%) participants responded to this question; some indicated more than one reason, and a total of 188 answers were obtained. The 10 most common responses are given in descending order of frequency in Table 5. The most common cause of violence was ignorance and lack of education of patients and their relatives (28.7%), followed by impatient and demanding attitudes (23.4%) and heavy workload with prolonged waiting times (10.6%).
 

Table 5. Answers to the questions: “What do you think is the reason for the violence?” and “How do you think violence against health care workers can be reduced?”
 
Participants were asked “How do you think violence against health care workers can be reduced?”. Some participants indicated more than one measure, and a total of 509 answers were obtained. The most important steps suggested to reduce violence against ED employees were the enactment of deterrent legislation (42.6%), increased security measures in hospitals (28.5%), and improved public education (16.7%) [Table 5].
 
Participants were asked about their attitude after experiencing violence. Some respondents gave more than one answer, and a total of 498 answers were obtained. After exposure to violence, 27.1% of participants no longer enjoyed working in their current profession, 25.7% wanted to work in a non–health care field, and 21.5% did not want to work in the ED (Table 6).
 

Table 6. The attitude of health care workers after exposure to violence (n=498)
 
A total of 96.3% (n=311) of participants answered “Yes” to the question “Do you think that violence against health care workers has increased in recent years?”, and 90.7% (n=293) answered “Yes” to the question “Do news reports regarding violence against health care workers affect you?”. When asked “How does the news affect you?”, 64.7% (n=209) reported feeling “sad”, 44.3% (n=143) “angry”, and 18.9% (n=61) “scared”.
 
When participants were asked “Are there sufficient security measures in your workplace?”, only 66 (20.4%) participants gave a positive response, while 257 (79.6%) responded negatively. Among the 41 participants working as security staff, 33 (80.5%) found the safety measures inadequate. Thus, both the security staff and the general employee population agreed that hospital security was inadequate.
 
Discussion
Workplace violence is most prevalent in the health care sector.4 The ED is the health care unit with the highest frequency of exposure to violence.3 7 8 9 According to several previous studies, the proportion of health care professionals who report prior exposure to violence in the workplace ranges from 45% to 67.6%.3 8 12 13 14 The rate of violence against ED employees (79%-99%), however, is higher than the average for the health care field.15 16 17
 
Emergency services are high-risk areas for workplace violence against patients and staff18 19 20 21; 24-hour accessibility, a high-stress environment, and the apparent lack of trained security personnel are underlying factors.22 Workplace violence negatively affects the morale and health of health care workers and the effectiveness of service delivery.23 24 25 26
 
Our study was conducted among ED employees of two different hospitals. We investigated the rate of exposure to verbal or physical violence. Among the participants, 239 (74.0%) stated that they had been subjected to violence, and 57 (17.6%) reported having been exposed to both verbal and physical violence. A study of ED employees, including nurses, in the İzmir province of Turkey found that 98.5% of respondents had been subjected to verbal violence and 19.7% had been exposed to physical violence.16 In another study conducted in Turkey, 88.6% of ED employees had been subjected to verbal violence and 49.4% reported having been the victim of physical violence.17
 
In the present study, the rate of exposure to violence by profession was 95.7% among health care officials/EMTs, 90.7% among doctors, and 80.5% among security personnel. According to Ayrancı et al,3 exposure to violence was most common among practitioners (67.6%) and nurses (58.4%). In another study, Alçelik et al27 reported that nurses were exposed to violence 3 times more often than other health care professionals. In the present study, the frequency of exposure to violence among nurses was 62.7%, which is lower than that in other professional groups.
 
In the present study, the estimated age distribution of patients and patient relatives responsible for violent incidents showed that the majority (71.4%) were between 18 and 39 years of age. Other studies have reported that individuals prone to violence are generally younger than 30 years.28
 
Health care workers are often subjected to verbal and physical attacks from patients and their relatives who are dissatisfied with the services provided.10 11 In the present study, the most common cause of violence was the lack of education and ignorance of the patients and their relatives. Heavy workload was identified as another cause of workplace violence. Factors such as patient stress and anxiety regarding their condition, high expectations of the patients and their relatives, lack of effective institutional and legal arrangements aimed at preventing violence, and the failure to effectively document the extent of workplace violence contribute to the high frequency of violence.12 There are several factors that increase the risk of violence in health care institutions, including 24-hour service, long waiting time for patients, poor access to health care services, heavy workload, limited staff, inadequate employee training, and lack of security personnel.29 30
 
Previous studies conducted in Turkey revealed that 60% of ED employees who were exposed to violence did not report the incident. Among the reasons for not reporting was a lack of confidence in health care and executive leadership as well as the justice system.12 In the present study, the incident reporting rate was also low (42.7%) and the most important reason (34.9%) for not reporting was the perception that “no resolution will be reached”. Indeed, a study found that there were no repercussions for the attacker in 77% of instances.12 This suggests the perception that “no resolution will be reached” is a valid one.
 
A heavy workload consumes the energy of employees and reduces their ability to empathise with patients and tolerate potentially violent situations. Verbal or physical conflict may arise between stressed patients facing long waiting times and exhausted, stressed health care workers. Training in communication with patients helps health care professionals avoid these problems.31 Effective communication alone, however, is not sufficient, and additional steps must be taken to reduce patient waiting times. Previous studies have indicated that the most important reason for patient dissatisfaction in the ED is the waiting time.32 33 Yet the most important reason for long waiting times is the heavy workload, caused in part by the discourteous attitude of patients and their relatives. Studies have also shown that more than half of patients who present to the ED are not ‘emergency patients’.34 35 36 Further education regarding the definition of “emergency” and the practice of effective triage may reduce the heavy workload in the ED and the associated violent incidents.
 
One previous study reported that verbal and physical attacks by patients and their relatives are the most important factors contributing to stress among ED employees.37 Consistent exposure to high-stress conditions resulting from verbal and physical violence leads to both physical and mental exhaustion, and a situation commonly known as ‘burnout syndrome’ emerges.38 39 Burnout syndrome is characterised by a negative view of current events, frequent despair, and loss of productivity and motivation.40 Reluctance among physicians to work in the ED is one consequence of burnout syndrome.41 In the present study, among the participants who had been subjected to violence, 21.5% indicated that they wanted to work in a department other than the ED, while 25.7% stated a desire to work outside the health care field. In a study conducted in Canada, 18% of participants who had been exposed to violence stated that they did not want to work in the ED, and 38% wanted to work outside the health care field9; others indicated that they had quit their jobs because of workplace stress.9 In the present study, 10.4% of ED employees stated that they were afraid of patients and their relatives, whereas in the same Canadian study, 73% of respondents stated that after experiencing violence they were afraid of patients.9 In our study, 96.3% of respondents thought that violence against ED health care workers had increased in recent years, and 79.6% stated that the safety measures in their institutions were insufficient. The participants suggested that deterrent legislation, increased security measures, and efforts to better educate the general population regarding the appropriate use of ED resources would help to reduce violence against health care workers.
 
Limitations
The study was carried out in only two hospitals in Turkey that may not be representative of all hospitals. In addition, participants could decide whether or not to answer all questions and some questionnaires were incomplete. The response rate was only 74% and this might give rise to self-selection bias, that is, those who did not respond may have had a higher (or lower) exposure to violence than those who responded. Hence, the various percentages reported in this paper might be over- or under-estimated.
 
Conclusion
The results of the current study, together with those of earlier studies, indicate that the prevalence of violence against ED employees is high. Factors such as patient and health care provider stress, prolonged waiting times due to overcrowding in the ED, the discourteous attitude of some patients and their relatives, insufficient security measures, and the lack of sufficiently dissuasive legal regulations may contribute to increased violence in the ED. These factors in turn increase stress among ED employees, reduce job satisfaction, and lower the quality of services provided. Measures to decrease the workload in the ED and shorten patient waiting times, the adoption of legal policies that deter violent behaviour, and increased security measures in health care facilities should be considered. Steps should also be taken to educate the public in order to reduce violence against health care workers.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Kocacik F. On violence [in Turkish]. Cumhuriyet Univ J Econ Adm Sci 2001;2:1-2.
2. Violence and injury prevention. Available from: http://www.who.int/violence_injury_prevention/violence/activities/workplace/documents/en/index.html. Accessed Nov 2012.
3. Ayrancı Ü, Yenilmez Ç, Günay Y, Kaptanoğlu C. The frequency of being exposed to violence in the various health institutions and health profession groups. Anatol J Psychiatry 2002;3:147-54.
4. Wells J, Bowers L. How prevalent is violence towards nurses working in general hospitals in the UK? J Adv Nurs 2002;39:230-40. Crossref
5. Workplace violence in the health sector. Framework guidelines for addressing workplace violence in the health sector. Available from: http://www.ilo.org/wcmsp5/groups/public/---ed_dialogue/---sector/documents/publication/wcms_160912.pdf. Accessed Nov 2012.
6. Kingma M. Workplace violence in the health sector: a problem of epidemic proportion. Int Nurs Rev 2001;48:129-30. Crossref
7. Gülalp B, Karcıoğlu Ö, Köseoğlu Z, Sari A. Dangers faced by emergency staff: experience in urban centers in southern Turkey. Ulus Travma Acil Cerrahi Derg 2009;15:239-42.
8. Lau J, Magarey J, McCutcheon H. Violence in the emergency department: a literature review. Aust Emerg Nurs J 2004;7:27-37. Crossref
9. Fernandes CM, Bouthillette F, Raboud JM, et al. Violence in the emergency department: a survey of health care workers. CMAJ 1999;161:1245-8.
10. Yanci H, Boz B, Demirkiran Ö, Kiliççioğlu B, Yağmur F. Medical personal subjected to the violence in emergency department—enquiry study. Turk J Emerg Med 2003;3:16-20.
11. Sucu G, Cebeci F, Karazeybek E. Violence by patient and relatives against emergency service personnel. Turk J Emerg Med 2007;7:156-62.
12. Çamci O, Kutlu Y. Determination of workplace violence toward health workers in Kocaeli. J Psychiatr Nurs 2011;2:9-16.
13. Stirling G, Higgins JE, Cooke MW. Violence in A&E departments: a systematic review of the literature. Accid Emerg Nurs 2001;9:77-85. Crossref
14. Sönmez M, Karaoğlu L, Egri M, Genç MF, Günes G, Pehlivan E. Prevalence of workplace violence against health staff in Malatya. Bitlis Eren Univ J Sci Technol 2013;3:26-31.
15. Stene J, Larson E, Levy M, Dohlman M. Workplace violence in the emergency department: giving staff the tools and support to report. Perm J 2015;19:e113-7. Crossref
16. Senuzun Ergün F, Karadakovan A. Violence towards nursing staff in emergency departments in one Turkish city. Int Nurs Rev 2005;52:154-60. Crossref
17. Boz B, Acar K, Ergin A, et al. Violence toward health care workers in emergency departments in Denizli, Turkey. Adv Ther 2006;23:364-9. Crossref
18. Joa TS, Morken T. Violence towards personnel in out-of-hours primary care: a cross-sectional study. Scand J Prim Health Care 2012;30:55-60. Crossref
19. Magnavita N, Heponiemi T. Violence towards health care workers in a Public Health Care Facility in Italy: a repeated cross-sectional study. BMC Health Serv Res 2012;12:108. Crossref
20. Arimatsu M, Wada K, Yoshikawa T, et al. An epidemiological study of work-related violence experienced by physicians who graduated from a medical school in Japan. J Occup Health 2008;50:357-61. Crossref
21. Taylor JL, Rew L. A systematic review of the literature: workplace violence in the emergency department. J Clin Nurs 2011;20:1072-85. Crossref
22. Gacki-Smith J, Juarez AM, Boyett L, Homeyer C, Robinson L, MacLean SL. Violence against nurses working in US emergency departments. J Nurs Adm 2009;39:340-9. Crossref
23. Kowalenko T, Gates D, Gillespie GL, Succop P, Mentzel TK. Prospective study of violence against ED workers. Am J Emerg Med 2013;31:197-205. Crossref
24. Position statement: violence in the emergency care setting. Available from: https://www.ena.org/government/State/Documents/ENAWorkplaceViolencePS.pdf. Accessed Nov 2012.
25. Workplace violence. Washington, DC: United States Department of Labor; 2013. Available from: http://www.osha.gov/SLTC/workplaceviolence/index.html. Accessed Nov 2012.
26. Adib SM, Al-Shatti AK, Kamal S, El-Gerges N, Al-Raqem M. Violence against nurses in healthcare facilities in Kuwait. Int J Nurs Stud 2002;39:469-78. Crossref
27. Alçelik A, Deniz F, Yeşildal N, Mayda AS, Ayakta Şerifi B. Health survey and life habits of nurses who work at the medical faculty hospital at AIBU [in Turkish]. TAF Prev Med Bull 2005;4:55-65.
28. Young GP. The agitated patient in the emergency department. Emerg Med Clin North Am 1987;5:765-81.
29. Stathopoulou HG. Violence and aggression towards health care professionals. Health Sci J 2007;2:1-7.
30. Hoag-Apel CM. Violence in the emergency department. Nurs Manage 1998;29:60,63.
31. Yardan T, Eden AO, Baydın A, Genç S, Gönüllü H. Communication with relatives of the patients in emergency department. Eurasian J Emerg Med 2008;7:9-13.
32. Al B, Yıldırım C, Togun İ, et al. Factors that affect patient satisfaction in emergency department. Eurasian J Emerg Med 2009;8:39-44.
33. Yiğit Ö, Oktay C, Bacakoğlu G. Analysis of the patient satisfaction forms about Emergency Department services at Akdeniz University Hospital. Turk J Emerg Med 2010;10:181-6.
34. Kiliçaslan İ, Bozan H, Oktay C, Göksu E. Demographic properties of patients presenting to the emergency department in Turkey. Turk J Emerg Med 2005;5:5-13.
35. Ersel M, Karcıoğlu Ö, Yanturali S, Yörüktümen A, Sever M, Tunç MA. Emergency Department utilization characteristics and evaluation for patient visit appropriateness from the patients’ and physicians’ point of view. Turk J Emerg Med 2006;6:25-35.
36. Aydin T, Aydın ŞA, Köksal Ö, Özdemir F, Kulaç S, Bulut M. Evaluation of features of patients attending the Emergency Department of Uludağ University Medicine Faculty Hospital and emergency department practices. Eurasian J Emerg Med 2010;9:163-8. Crossref
37. Kalemoglu M, Keskin O. Evaluation of stress factors and burnout in the emergency department staff [in Turkish]. Ulus Travma Derg 2002;8:215-9.
38. Ferns T, Stacey C, Cork A. Violence and aggression in the emergency department: Factors impinging on nursing research. Accid Emerg Nurs 2006;14:49-55. Crossref
39. Keser Özcan N, Bilgin H. Violence towards healthcare workers in Turkey: A systematic review [in Turkish]. Turkiye Klinikleri J Med Sci 2011;31:1442-56. Crossref
40. Maslach C. Burned-out. Hum Behav 1976;5:16-22.
41. Dwyer BJ. Surviving the 10-year ache: emergency practice burnout. Emerg Med Rep 1991;23:S1-8.

Population-based survey of the prevalence of lower urinary tract symptoms in adolescents with and without psychotropic substance abuse

Hong Kong Med J 2016 Oct;22(5):454–63 | Epub 12 Aug 2016
DOI: 10.12809/hkmj154806
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Population-based survey of the prevalence of lower urinary tract symptoms in adolescents with and without psychotropic substance abuse
YH Tam, FHKAM (Surgery)1; CF Ng, FHKAM (Surgery)2; YS Wong, FHKAM (Surgery)1; Kristine KY Pang, FHKAM (Surgery)1; YL Hong, MSc1; WM Lee, MSc2; PT Lai, BN2
1 Division of Paediatric Surgery and Paediatric Urology, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong
2 Division of Urology, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Shatin, Hong Kong
 
Corresponding author: Dr YH Tam (pyhtam@surgery.cuhk.edu.hk)
 
 
Abstract
Objective: To investigate the prevalence of lower urinary tract symptoms in adolescents and the effects of psychotropic substance use.
 
Methods: This was a population-based cross-sectional survey using a validated questionnaire, administered to students from 45 randomly selected secondary schools in Hong Kong over the period of January 2012 to January 2014. A total of 11 938 secondary school students (response rate, 74.6%) completed and returned a questionnaire that was eligible for analysis. Individual lower urinary tract symptoms and history of psychotropic substance abuse were documented.
 
Results: In this study, 11 617 non-substance abusers were regarded as control subjects and 321 (2.7%) were psychotropic substance users. Among the control subjects, 2106 (18.5%) had experienced at least one lower urinary tract symptom, with urinary frequency being the most prevalent symptom (10.2%). Females had more daytime urinary incontinence (P<0.001) and males had more voiding symptoms (P=0.01). The prevalence of lower urinary tract symptoms increased with age, from 13.9% in those aged <13 years to 25.8% in young adults aged ≥18 years (P<0.001). Among the substance users, ketamine was the most commonly abused substance. Substance users had significantly more lower urinary tract symptoms than control subjects (P<0.001). In multivariate analysis, increasing age and psychotropic substance abuse increased the odds of lower urinary tract symptoms. Non-ketamine substance users and ketamine users were respectively 2.8-fold (95% confidence interval, 2.0-3.9) and 6.2-fold (4.1-9.1) more likely than control subjects to develop lower urinary tract symptoms. Females (odds ratio=9.9; 95% confidence interval, 5.4-18.2) were more likely than males (4.2; 2.5-7.1) to develop lower urinary tract symptoms when ketamine was abused.
 
Conclusions: Lower urinary tract symptoms are prevalent in the general adolescent population. It is important to obtain an accurate history regarding psychotropic substance use when treating teenagers with lower urinary tract symptoms.
 
 
New knowledge added by this study
  • Prevalence of lower urinary tract symptoms (LUTS) increases consistently from onset of adolescence towards adulthood. Psychotropic substance abuse, particularly ketamine, is associated with an increased risk of developing LUTS in adolescents. Girls are more susceptible than boys if ketamine is abused.
Implications for clinical practice or policy
  • It is important to obtain an accurate history regarding psychotropic substance use when treating teenagers with LUTS.
 
 
Introduction
Lower urinary tract symptoms (LUTSs) are prevalent worldwide. An estimated 45.2% of the 2008 worldwide population aged ≥20 years were affected by at least one LUTS.1 Large-scale population-based surveys have reported that LUTS prevalence increases with advancing age, up to 60% at the age of 60 years.2 Evaluation and treatment of LUTS for the general population have incurred significant costs to the health care system. In children, the association of LUTS with urinary tract infection, persistent vesicoureteric reflux, renal scarring, and constipation has drawn substantial attention over the years.3 4 Among the various LUTSs, urinary incontinence (UI) has been the most extensively investigated in children, with the reported prevalence varying from 1.8% to 20%.5 Previous studies of the prevalence of individual LUTSs using the International Children’s Continence Society (ICCS) definitions6 have focused primarily on pre-adolescent children in primary schools.7 8 9 10 To date, no large-scale studies have investigated the prevalence of LUTSs in adolescents.
 
Psychotropic substance use among adolescents is a growing concern worldwide and creates psychosocial, security, and health care issues. In recent years, ketamine abuse has been found to cause severe LUTSs and Hong Kong is one of the earliest countries/regions to report the newly established clinical entity of ketamine-associated uropathy.11 12 13 Ketamine is the most popular psychotropic substance being abused by people aged <21 years in our society.14 The aim of the present study was to investigate the prevalence of LUTSs in our adolescents and the differences between those with and without psychotropic substance use.
 
Methods
Study design, sample size estimation, and participant recruitment
This was a cross-sectional questionnaire survey that recruited adolescents from secondary schools serving Hong Kong local residents during the period of January 2012 to January 2014. There were almost 500 secondary schools in Hong Kong serving approximately 470 000 adolescents in 2009/10. Based on the data of children and young adults available in the literature,2 9 we assumed the prevalence of LUTSs among our adolescents to be 20%. A study sample of 6050 participants would be required to allow an error of ±1%. Government sources suggested that 2.3% of our secondary school students used psychotropic substances in 2011/12.15 We assumed the prevalence of LUTSs among those secondary students using psychotropic substances to be 15% higher than in normal subjects. In order to detect a difference with a type I error of 0.05 and a power of 0.8, a sample size of 4500 participants would be required. Based on the above two assumptions and a predicted response rate of 50% to 60%, we determined that a potential target of not less than 10 000 participants would be required.
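The first of the two calculations above follows the standard normal-approximation formula for estimating a single proportion to a given precision, n = z²p(1−p)/e². As an illustration only (not the authors' actual computation), the sketch below reproduces that arithmetic in Python; the 95% confidence level and the finite-population correction are our assumptions, since the paper does not state them.

```python
# Sketch of a precision-based sample size calculation for a single
# proportion: n = z^2 * p * (1 - p) / e^2.
# Assumptions (not stated in the paper): 95% confidence (z ~ 1.96) and a
# finite-population correction for ~470 000 students; the paper's exact
# figure of 6050 may reflect slightly different adjustments.
from scipy.stats import norm

p = 0.20                       # assumed LUTS prevalence in adolescents
e = 0.01                       # desired margin of error (+/-1%)
z = norm.ppf(0.975)            # two-sided 95% confidence

n = z**2 * p * (1 - p) / e**2
print(round(n))                # ~6147 without any correction

N = 470_000                    # approximate secondary school population
n_fpc = n / (1 + (n - 1) / N)  # finite-population correction
print(round(n_fpc))            # ~6067, close to the 6050 reported
```

The power calculation for the two-group comparison could be carried out analogously with standard two-proportion power formulae (eg statsmodels' NormalIndPower), with an allocation ratio reflecting the expected 2.3% prevalence of substance use.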
 
In the selection of schools we included all government, aided, and Direct Subsidy Scheme schools. Private international schools and special schools were excluded. Co-educational, boys’, and girls’ schools were included. The list of secondary schools was provided by the Education Bureau and schools were grouped into 18 geographical districts. As the prevalence of psychotropic substance use might vary significantly between schools, we arbitrarily decided to recruit participants from at least 8% to 10% of the secondary schools in order to reduce sampling bias.
 
The random selection process started with drawing a district followed by a school within the selected district. Based on a rough estimation of population distribution, we intended to select schools from Hong Kong Island (HKI), Kowloon (Kln), and New Territories (NT) in an approximate ratio of 1:2:3. We invited the selected schools to participate in the study. If the invitation was declined, the next school in the drawing sequence was contacted. The above procedure was repeated until the target sample size was reached. Finally, of the 121 schools selected and approached, 45 agreed to participate (HKI, n=7; Kln, n=13; NT, n=25), giving a potential target of 16 000 participants.
 
The grades/classes of students participating in the survey from each school were not randomly selected but were determined after discussion and mutual agreement with the school management. In order to avoid possible bias from intentional selection or exclusion of a particular class of students, school management was invited to express a preference about which grade or grades of students would participate, provided that all students of the selected grade(s) participated. Although we tried to avoid over-representation of a particular grade of students by making suggestions to the school management, their preferences were always respected and accepted. Of the 45 participating schools, we recruited two or three grades of Form 1-3 students in 18 schools, two or three grades of Form 4-6 students in 10 schools, and all the students in eight schools. In the remaining nine schools, we recruited only one grade of students.
 
Study measures
The measuring tool was an anonymous self-reported questionnaire accompanied by an information sheet. In both the information sheet and the questionnaire, we stated clearly that participation in the study was voluntary and consent to participate was presumed on receipt of a completed questionnaire that was returned in the envelope provided. Individuals who did not consent to participate were told to disregard the questionnaire. The questionnaire consisted of three parts: demographic data on gender and age, LUTS assessment, and history of psychotropic substance use (Appendix).
 

Appendix. The questionnaire
 
Age was divided into four categories: <13, 13-15, 16-17, and ≥18 years. Participants were asked to respond to an 8-item LUTS assessment that included storage symptoms (urinary frequency, urgency, nocturia, and daytime UI), voiding symptoms (intermittent stream, straining, and dysuria), and a post-micturition symptom (incomplete emptying). The recall period was the last 4 weeks. The LUTS questions were adapted from the Hong Kong Chinese version of the International Prostate Symptom Score questionnaire that has been validated to assess LUTSs in our local adult population.16 We believed that the level of comprehension of most of our adolescent participants in secondary education was close to that of an average adult. The response options for most of the LUTSs were on a 6-point Likert scale: “never”, “seldom (<20% of the time)”, “sometimes (20-50% of the time)”, “often (50% of the time)”, “always (>50% of the time)”, and “almost every time”. Any LUTS reported with a frequency of ‘≥20% of the time’ was defined as being present in the study subject. Daytime UI and nocturia were assessed on a different 5-point Likert scale according to their frequency, and were defined as present if the study subject experienced them at least 1 to 3 times per month and at least 2 times per night, respectively.2 17
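The symptom definitions above amount to a simple set of dichotomisation rules. Purely as an illustration of those rules (the response labels are copied from the text; the function names and data layout are ours, not the authors'), a coding sketch in Python might look like this:

```python
# Illustrative coding of the symptom-presence rules described above
# (not the authors' code). Most LUTS items are counted as present when
# reported at >=20% of the time; daytime UI and nocturia have their own
# thresholds (>=1 episode per month and >=2 voids per night).

FREQUENCY_PRESENT = {
    "never": False,
    "seldom (<20% of the time)": False,
    "sometimes (20-50% of the time)": True,
    "often (50% of the time)": True,
    "always (>50% of the time)": True,
    "almost every time": True,
}

def luts_present(response: str) -> bool:
    """Dichotomise a general LUTS item at the >=20%-of-the-time threshold."""
    return FREQUENCY_PRESENT[response]

def daytime_ui_present(episodes_per_month: float) -> bool:
    """Daytime UI counted as present at >=1 episode per month."""
    return episodes_per_month >= 1

def nocturia_present(voids_per_night: int) -> bool:
    """Nocturia counted as present at >=2 voids per night."""
    return voids_per_night >= 2
```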
 
Responses to questions on psychotropic substance use were dichotomised as either “yes” or “no”. Those with positive responses were directed to questions on the type of substance being abused, which included ketamine, ecstasy, methamphetamine, cough mixture, marijuana, and others. Participants were allowed to indicate more than one substance. According to the response to questions on psychotropic substance use, the participants were classified as control subjects or psychotropic substance users. The psychotropic substance users were further subdivided into either ketamine users or non-ketamine users.
 
Statistical analysis
The responses to each LUTS were dichotomised as “present” versus “absent” and the prevalence rate for each LUTS was expressed as a percentage with 95% confidence interval (CI). Missing data were excluded from analysis. Chi squared and trend tests were performed in univariate analysis to compare prevalence differences between groups divided by gender, age, and psychotropic substance use. Using the outcome of “at least one LUTS”, dichotomised into “yes” or “no”, a binary logistic regression model using the enter method was set up to investigate risk factors including gender, age, and psychotropic substance use. The odds ratio (OR) of “at least one LUTS” was estimated with 95% CI for the potential risk factors. A P value of <0.05 was considered significant.
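To make the analysis pipeline concrete, the sketch below shows how each step described above (prevalence with a 95% CI, a chi squared group comparison, and a binary logistic regression yielding ORs with 95% CIs) could be carried out in Python with statsmodels. The column names and data layout are assumptions for illustration; the paper does not report the software used.

```python
# Sketch of the analysis steps described above, on an assumed pandas
# DataFrame `df` with one row per valid respondent and columns
# 'any_luts' (0/1), 'group', 'sex', 'age_group', 'substance_group'.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

def prevalence_ci(present: pd.Series):
    """Prevalence with a normal-approximation 95% CI; missing data excluded."""
    present = present.dropna()
    p = present.mean()
    se = np.sqrt(p * (1 - p) / len(present))
    return p, (p - 1.96 * se, p + 1.96 * se)

def compare_groups(df: pd.DataFrame) -> float:
    """Chi squared test of 'at least one LUTS' between groups; returns P value."""
    table = pd.crosstab(df["group"], df["any_luts"])
    return chi2_contingency(table)[1]

def fit_logistic(df: pd.DataFrame):
    """Binary logistic regression (enter method): all predictors entered together."""
    X = pd.get_dummies(df[["sex", "age_group", "substance_group"]],
                       drop_first=True).astype(float)
    X = sm.add_constant(X)
    result = sm.Logit(df["any_luts"].astype(float), X).fit(disp=0)
    odds_ratios = np.exp(result.params)   # ORs for each predictor level
    or_ci = np.exp(result.conf_int())     # 95% CIs on the OR scale
    return odds_ratios, or_ci
```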
 
The study protocol was approved by the Joint CUHK-NTEC Clinical Research Ethics Committee.
 
Results
A total of 16 000 questionnaires were sent to schools, and 11 938 eligible questionnaires were returned (estimated response rate, 74.6%). The response rate was estimated because the number of questionnaires delivered to each school was not necessarily equal to the number of students of that school who received one. The conduct of the survey at schools was not supervised, and we were uncertain whether students absent from school received our questionnaire. The number of questionnaires requested by each school was always rounded off to the nearest 10 and not necessarily equal to the actual number of students in the selected classes. It seems reasonable to assume that the actual number of students who received our questionnaires was less than 16 000 and that the actual response rate might therefore be higher. There were similar numbers of males (n=6040) and females (n=5819) among the participants who responded to the question on gender. Among the 11 938 participants, 11 617 did not report use of any psychotropic substance and were defined as control subjects; 321 (2.7%) participants who reported having used one or more types of psychotropic substance were defined as substance users.
 
Of 11 617 control subjects, 2106 (18.5% of valid responses) had experienced at least one LUTS with a symptom frequency of ‘≥20% of the time’ in the last 4 weeks (Table 1). The most prevalent LUTSs were urinary frequency (10.2%), incomplete emptying (5.4%), and nocturia ≥2 times per night (4.4%). Daytime UI ≥1 to 3 times per month was reported by 3.7% of control subjects. Females had more daytime UI than males (5.2% vs 2.2%; P<0.001), while males had significantly more voiding symptoms and incomplete emptying. There was a significant increase in the prevalence of all LUTSs except daytime UI across the age-groups from <13 years to the young adulthood age-group of ≥18 years (Table 2).
 

Table 1. Prevalence of LUTS in control subjects and comparison by gender
 

Table 2. Comparison of LUTS in control subjects by age
 
Compared with control subjects, the psychotropic substance users experienced significantly more LUTSs in all areas (Table 3). Of the 321 substance abusers, 305 responded to the question about the types of psychotropic substance abused. Ketamine was the most commonly abused substance (n=139; 45.6%), followed by cough mixture (n=96; 31.5%), ecstasy (n=77; 25.2%), methamphetamine (n=76; 24.9%), and marijuana (n=70; 23.0%). Of ketamine users, 60.7% had at least one LUTS. Compared with non-ketamine substance users, ketamine users experienced significantly more LUTSs in all areas except daytime UI, for which a higher but non-significant prevalence was still observed. Female ketamine users appeared to be more affected by LUTSs than males (Table 4).
 

Table 3. Comparison of control subjects with psychotropic substance users
 

Table 4. Comparison of ketamine users with non-ketamine substance users, and male with female ketamine users
 
In multivariate analysis, increasing age and psychotropic substance use were found to increase the odds for experiencing at least one LUTS. With reference to age of <13 years, the ORs of experiencing at least one LUTS at age 13-15, 16-17, and ≥18 years were 1.3 (95% CI, 1.1-1.5), 1.7 (95% CI, 1.4-2.0), and 2.1 (95% CI, 1.7-2.7), respectively. With reference to the control subjects, the ORs of experiencing at least one LUTS were 2.8 (95% CI, 2.0-3.9) for those who used substances other than ketamine, and 6.2 (95% CI, 4.1-9.1) for those who used ketamine. When assessing the two genders separately in multivariate analysis, female ketamine users were 9.9-fold (95% CI, 5.4-18.2) and male ketamine users were 4.2-fold (95% CI, 2.5-7.1) more likely than their non-abuser counterparts to develop LUTSs.
 
Discussion
Large-scale population-based surveys of LUTS prevalence have been conducted in adults.2 17 Recently a few paediatric studies using the ICCS definitions have reported LUTS prevalence in children varying from 9.3% to 46.4%.7 8 9 The wide variation in prevalence can be attributed to the differences in the study population, questions used to assess LUTS, and the criteria to define the presence of symptoms. Vaz et al8 reported a prevalence of 21.8% in 739 Brazilian children aged 6 to 12 years while Yüksel et al7 found 9.3% of their 4016 Turkish children aged 6 to 15 years had LUTSs. In both studies, the investigators used validated scoring systems for a combination of LUTSs being assessed and pre-determined cut-off points in the total scores to define the presence or absence of LUTS.7 8 In contrast, Chung et al9 investigated 16 516 Korean children aged 5 to 13 years by measuring the presence of individual LUTS and reported the highest prevalence of 46.4% experiencing at least one LUTS. The high prevalence rate in the Korean study can be partly explained by their methodology wherein the responses to the LUTS questions were dichotomised into “yes” or “no” and a positive symptom was defined without considering its frequency.9
 
To the best of our knowledge, the present study is the first large-scale prevalence study focused on adolescents. We used a similar methodology to other major adult studies to measure each LUTS individually and define its presence by a frequency threshold of ‘≥20% of the time’.2 17 18 19 We agree with others that using a scoring system to define LUTS in a prevalence study may not reflect the true impact of individual LUTS as it is possible that a highly prevalent symptom may happen alone and the summed score may not reach the threshold.17
 
In our adolescents without any substance abuse, 18.5% experienced at least one LUTS. Our finding suggests that LUTS prevalence in adolescents appears to be lower than that in young adults. Previous studies including two conducted in Chinese populations have reported that 17% to 42% of men and women aged 18 to 39 years experience at least one LUTS.2 18 19 Notably, LUTS prevalence increased with age during adolescence from 13.9% in those <13 years to 25.8% in those aged ≥18 years in this study. In children, the prevalence of LUTS peaks at age 5 to 7 years and then declines with increasing age up to 13 to 14 years.7 8 9 10 20 The decline in prevalence has been attributed to the maturation of urinary bladder function along with the growth and development of children. Our study is the first to provide evidence that LUTS prevalence rises from the trough at the onset of adolescence and continues to increase throughout adolescence into adulthood. Our reported prevalence of 25.8% in those participants aged ≥18 years is in agreement with the trend in young adults reported elsewhere.2 18 19
 
Little is known in the existing literature regarding the trend of LUTS prevalence from adolescence to adulthood. In a Finnish study of 594 subjects aged 4 to 26 years, the authors reported that individuals aged 18 to 26 years had more urgency than the other two age-groups of 8 to 12 years and 13 to 17 years.10 Although adolescence spans less than a decade, it is unique with rapid physical, psychological, and developmental changes. Reasons for the increase in LUTS prevalence from adolescence to young adulthood are largely unknown but likely to be multifactorial. Changes in lifestyle, altered micturition behaviour, habitual postponement of micturition, unhealthy bowel habits, attitudes to the use of school toilets, anxiety associated with academic expectations, or worsening of relationships with family may all contribute to newly developed LUTS during adolescence. Further studies are warranted to investigate this phenomenon.
 
Our findings that storage symptoms were more prevalent than voiding symptoms are in agreement with the reported results in young adults.2 18 19 Urinary frequency (10.2%) and nocturia ≥2 times per night (4.4%) were the two most prevalent storage symptoms among the control subjects. We agree with others that nocturia once per night is very common in the general population and using the threshold of nocturia ≥2 times per night as LUTS is more appropriate.2 17 18 19 Only 2.9% of our control subjects had urgency suggestive of overactive bladder (OAB) according to ICCS definitions,6 in contrast to 12% of Korean children aged 13 years.20 Children with OAB may have urinary frequency in addition to urgency. The much lower prevalence of urgency than urinary frequency in our study suggests that many of our study subjects had urinary frequency unrelated to OAB. Glassberg et al21 found over 70% of their paediatric patients with dysfunctional voiding (DV) and primary bladder neck dysfunction (PBND) experienced urinary frequency; DV and PBND are also associated with high residual urine volume. Our finding that the feeling of incomplete emptying (5.4%) was the second most prevalent LUTS suggests that in some participants urinary frequency was secondary to incomplete bladder emptying associated with DV or PBND.
 
In our study, male non-substance users experienced more voiding symptoms while females had more daytime UI. Literature has consistently found female gender to be a risk factor for daytime UI in children.5 22 23 Our finding suggests that the gender association with daytime UI extends from childhood to adolescence. There are inconsistencies in the paediatric literature with respect to gender differences in voiding symptoms. Kyrklund et al10 found more voiding symptoms in boys than girls only in the age-group of 4 to 7 years, while such difference was not noted by others.8 24
 
Psychotropic substance use increased the risk of LUTS in our adolescents. Notably, 60% of our adolescents who abused ketamine had experienced at least one LUTS and had high prevalence rates of 28% to 47% in all areas of LUTS. Our finding that 2.7% of our participants abused psychotropic substances is consistent with the latest figure of 2.3% estimated by our government in its survey conducted in 2011/12.15 Ketamine-associated uropathy has emerged as a new clinical entity in our society since 2007.13 This chemically induced cystitis as a result of the urinary metabolites of ketamine is associated with severe LUTS with the possible consequence of irreversible bladder damage.12 25 Little information is available in the medical literature about the prevalence of LUTS among ketamine users. An online survey conducted in the UK reported a prevalence of 26.6% of at least one LUTS in the last 12 months among 1285 participants who had illicitly used ketamine.26 The LUTS prevalence is likely influenced by variation in dose and frequency of ketamine use of the study population. We have recently reported that both the dose and frequency of ketamine use and female gender are associated with the severity of the LUTS at presentation among the young patients who sought urological treatment for ketamine-associated uropathy.25 In the present study, female ketamine users were at a higher risk of developing LUTS than males. This observation is in agreement with our previous findings and our postulation that females appear to be more susceptible to the chemically induced injury following illicit use of ketamine for unknown reasons.25
 
Non-ketamine substance users also experienced more LUTSs than the control subjects in this study although the prevalence was not as high as that of ketamine users. Most recently Korean investigators have reported a 77% prevalence rate of LUTS among a group of young methamphetamine (also known as ‘ice’) users, and suggested that a pathological dopaminergic mechanism plays a predominant role in methamphetamine-associated LUTS.27 There has been a rising trend of using methamphetamine in recent years and it is now the second most popular psychotropic substance abused by youths aged <21 years in our community.14 It would not be surprising if we encountered more and more young patients presenting with LUTS associated with methamphetamine use in the foreseeable future.
 
Limitations of this study
There was potential bias in the sampling process because almost two thirds of the schools that we selected and approached refused to participate, the grades of the participants were not randomly selected, and the non-response rate was approximately 20%. Young participants in lower grades may not have been able to comprehend the LUTS questions that were designed for adults. Nevertheless, our finding of 2.7% psychotropic substance use appears to be consistent with the 2.3% reported by the 2011/12 government survey of over 80 000 secondary school students.15 We did not study other potential risk factors that may be associated with LUTSs in adolescents, such as bowel function, urinary tract infection, stressful events, lifestyle, and toilet environment. The 0.5% to 2% missing data in each of the LUTS questions, though small, may still affect the estimated prevalence of each LUTS among our control subjects. Although daytime UI was not a prevalent symptom, the fact that fewer than half of the participants were asked this question because of a printing error may lead to underestimation of the overall prevalence of experiencing at least one LUTS among the different subgroups. The 4-week recall period allowed only a crude assessment of LUTSs. A more prevalent symptom may not necessarily cause more inconvenience than a less prevalent symptom. How much each individual LUTS bothered the participants, and whether substance abusers and non-substance abusers were bothered differently, were not investigated in this study. Therefore, individuals, particularly the non-substance abusers, who reported LUTSs did not necessarily suffer from any established lower urinary tract condition that warranted medical attention. The dose and frequency of illicit psychotropic substance use would certainly have an impact on the prevalence of LUTSs but this was not investigated in this survey.
 
Despite all these limitations, our study provides important data on the prevalence of LUTS in adolescents and the effect of psychotropic substance use. The LUTSs are prevalent in the general adolescent population. It is important for clinicians to obtain a history about psychotropic substance use when treating teenagers with LUTS as there is a substantial possibility that the LUTSs are caused by organic pathology associated with psychotropic substance use and not functional voiding disorders.
 
Appendix
Additional material related to this article can be found on the HKMJ website. Please go to <http://www.hkmj.org>, and search for the article.
 
Declaration
The study was supported by the Beat Drugs Fund (BDF101012) of the Hong Kong SAR Government. The funding source had no role in the study design, data collection, data analysis, results interpretation, writing of the manuscript, or the decision to submit the manuscript for publication. All authors have no conflicts of interest relevant to this article to disclose.
 
References
1. Irwin DE, Kopp ZS, Agatep B, Milsom I, Abrams P. Worldwide prevalence estimates of lower urinary tract symptoms, overactive bladder, urinary incontinence and bladder outlet obstruction. BJU Int 2011;108:1132-8. Crossref
2. Irwin DE, Milsom I, Hunskaar S, et al. Population-based survey of urinary incontinence, overactive bladder, and other lower urinary tract symptoms in five countries: results of the EPIC study. Eur Urol 2006;50:1306-14; discussion 1314-5. Crossref
3. Koff AS, Wagner TT, Jayanthi VR. The relationship among dysfunctional elimination syndromes, primary vesicoureteral reflux and urinary tract infections in children. J Urol 1998;160:1019-22. Crossref
4. Leonardo CR, Filgueiras MF, Vasconcelos MM, et al. Risk factors for renal scarring in children and adolescents with lower urinary tract dysfunction. Pediatr Nephrol 2007;22:1891-6. Crossref
5. Sureshkumar P, Jones M, Cumming R, Craig J. A population based study of 2,856 school-age children with urinary incontinence. J Urol 2009;181:808-15; discussion 815-6. Crossref
6. Nevéus T, von Gontard A, Hoebeke P, et al. The standardization of terminology of lower urinary tract function in children and adolescents: report from the Standardization Committee of the International Children’s Continence Society. J Urol 2006;176:314-24. Crossref
7. Yüksel S, Yurdakul AC, Zencir M, Cördük N. Evaluation of lower urinary tract dysfunction in Turkish primary schoolchildren: an epidemiological study. J Pediatr Urol 2014;10:1181-6. Crossref
8. Vaz GT, Vasconcelo MM, Oliveira EA, et al. Prevalence of lower urinary tract symptoms in school-age children. Pediatr Nephrol 2012;27:597-603. Crossref
9. Chung JM, Lee SD, Kang DI, et al. An epidemiologic study of voiding and bowel habits in Korean children: a nationwide multicenter study. Urology 2010;76:215-9. Crossref
10. Kyrklund K, Taskinen S, Rintala RJ, Pakarinen MP. Lower urinary tract symptoms from childhood to adulthood: a population based study of 594 Finnish individuals 4 to 26 years old. J Urol 2012;188:588-93. Crossref
11. Wood D, Cottrell A, Baker SC, et al. Recreational ketamine: from pleasure to pain. BJU Int 2011;107:1881-4. Crossref
12. Chu PS, Ma WK, Wong SC, et al. The destruction of the lower urinary tract by ketamine abuse: a new syndrome? BJU Int 2008;102:1616-22. Crossref
13. Chu PS, Kwok SC, Lam KM, et al. ‘Street ketamine’–associated bladder dysfunction: a report of ten cases. Hong Kong Med J 2007;13:311-3.
14. Central Registry of Drug Abuse Sixty-third Report 2004-2013. Narcotics Division, Security Bureau, The Government of the Hong Kong Special Administrative Region. Available from: http://www.nd.gov.hk/en/crda_63rd_report.htm. Accessed Dec 2015.
15. The 2011/12 survey of drug use among students. Narcotics Division, Security Bureau, The Government of the Hong Kong Special Administrative Region. Available from: http://www.nd.gov.hk/en/survey_of_drug_use_11-12.htm. Accessed Dec 2015.
16. Yee CH, Li JK, Lam HC, Chan ES, Hou SS, Ng CF. The prevalence of lower urinary tract symptoms in a Chinese population, and the correlation with uroflowmetry and disease perception. Int Urol Nephrol 2014;46:703-10. Crossref
17. Coyne KS, Sexton CC, Thompson CL, et al. The prevalence of lower urinary tract symptoms (LUTS) in the USA, the UK and Sweden: results from the Epidemiology of LUTS (EpiLUTS) study. BJU Int 2009;104:352-60. Crossref
18. Zhang L, Zhu L, Xu T, et al. A population-based survey of the prevalence, potential risk factors, and symptom-specific bother of lower urinary tract symptoms in adult Chinese women. Eur Urol 2015;68:97-112. Crossref
19. Wang Y, Hu H, Xu K, Wang X, Na Y, Kang X. Prevalence, risk factors and the bother of lower urinary tract symptoms in China: a population-based survey. Int Urogynecol J 2015;26:911-9. Crossref
20. Chung JM, Lee SD, Kang DI, et al. Prevalence and associated factors of overactive bladder in Korean children 5-13 years old: a nationwide multicenter study. Urology 2009;73:63-7; discussion 68-9. Crossref
21. Glassberg KI, Combs AJ, Horowitz M. Nonneurogenic voiding disorders in children and adolescents: clinical and videourodynamic findings in 4 specific conditions. J Urol 2010;184:2123-7. Crossref
22. Kajiwara M, Inoue K, Usui A, Kurihara M, Usui T. The micturition habits and prevalence of daytime urinary incontinence in Japanese primary school children. J Urol 2004;171:403-7. Crossref
23. Hellström A, Hanson E, Hansson S, Hjälmås K, Jodal U. Micturition habits and incontinence in 7-year-old Swedish school entrants. Eur J Pediatr 1990;149:434-7. Crossref
24. Akil IO, Ozmen D, Cetinkaya AC. Prevalence of urinary incontinence and lower urinary tract symptoms in school-age children. Urol J 2014;11:1602-8.
25. Tam YH, Ng CF, Pang KK, et al. One-stop clinic for ketamine-associated uropathy: report on service delivery model, patients’ characteristics and non-invasive investigations at baseline by a cross-sectional study in a prospective cohort of 318 teenagers and young adults. BJU Int 2014;114:754-60. Crossref
26. Winstock AR, Mitcheson L, Gillatt DA, Cottrell AM. The prevalence and natural history of urinary symptoms among recreational ketamine users. BJU Int 2012;110:1762-6. Crossref
27. Koo KC, Lee DH, Kim JH, et al. Prevalence and management of lower urinary tract symptoms in methamphetamine abusers: an under-recognized clinical identity. J Urol 2014;191:722-6. Crossref

Clinical transition for adolescents with developmental disabilities in Hong Kong: a pilot study

Hong Kong Med J 2016 Oct;22(5):445–53 | Epub 19 Aug 2016
DOI: 10.12809/hkmj154747
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Clinical transition for adolescents with developmental disabilities in Hong Kong: a pilot study
Tamis W Pin, PhD; Wayne LS Chan, PhD; CL Chan, BSc (Hons) Physiotherapy; KH Foo, BSc (Hons) Physiotherapy; Kevin HW Fung, BSc (Hons) Physiotherapy; LK Li, BSc (Hons) Physiotherapy; Tina CL Tsang, BSc (Hons) Physiotherapy
Department of Rehabilitation Sciences, Hong Kong Polytechnic University, Hunghom, Hong Kong
 
Corresponding author: Dr Tamis W Pin (tamis.pin@polyu.edu.hk)
 
This paper was presented as a poster at the Hong Kong Physiotherapy Association Conference 2015, Hong Kong on 3-4 November 2015.
 
 
Abstract
Introduction: Children with developmental disabilities usually move from the paediatric to adult health service after the age of 18 years. This clinical transition is fragmented in Hong Kong. There are no local data for adolescents with developmental disabilities and their families about the issues they face during the clinical transition. This pilot study aimed to explore and collect information from adolescents with developmental disabilities and their caregivers about their transition from paediatric to adult health care services in Hong Kong.
 
Methods: This exploratory survey was carried out in two special schools in Hong Kong. A convenience sample of adolescents with developmental disabilities and their parents was recruited. The questionnaire was administered by interviewers in Cantonese. Descriptive statistics were used to analyse the answers to closed-ended questions. Responses to open-ended questions were summarised.
 
Results: In this study, 22 parents (mean age ± standard deviation: 49.9 ± 10.0 years) and 13 adolescents (19.6 ± 1.0 years) completed the face-to-face questionnaire. The main diagnoses of the adolescents were cerebral palsy (59%) and cognitive impairment (55%). Of the study parents, 77% were reluctant to transition. Of the 10 families who did move to adult care, 60% of the parents were not satisfied with the services. The main reasons were reluctance to change and dissatisfaction with the adult medical service. The participants emphasised their need for a structured clinical transition service to support them during this challenging time.
 
Conclusions: This study is the first in Hong Kong to present preliminary data on adolescents with developmental disabilities and their families during transition from paediatric to adult medical care. Further studies are required to understand the needs of this population group during clinical transition.
 
 
New knowledge added by this study
  • These results are the first published findings on clinical transition for adolescents with developmental disabilities in Hong Kong.
  • Dissatisfaction with the adult health services and reluctance to change were the main barriers to clinical transition.
  • The concerns and needs of the families were similar regardless of whether adolescents had physical or cognitive disabilities.
Implications for clinical practice or policy
  • A structured clinical transition service is required for adolescents with developmental disabilities and their parents.
  • Further in-depth studies are required to examine the needs for and concerns about clinical transition for all those involved. This should include adolescents with developmental disabilities, their parents or caregivers, and service providers in both paediatric and adult health services.
 
 
Introduction
Advances in medical management now enable children with developmental disabilities (DD) who may previously have died to live well into adulthood.1 Such disabilities are defined as any condition that is present before the age of 22 years, is due to physical or cognitive impairment or a combination of both, and significantly affects self-care, receptive and expressive language, mobility, learning, independent living, or the economic independence of the individual.2 The transition from adolescence to adulthood is a critical period for all young people.3 In a clinical context, adult transition is “the purposeful, planned movement of adolescents and young adults with chronic physical and medical conditions from child-centered to adult-oriented health-care systems”.4 In 2001, a consensus statement with guidelines was endorsed in developed countries such as the United States to ensure that adolescents with DD, who depend on coordinated health care services, make a smooth transition to the adult health care system and receive the services that they need.5
 
Researchers have identified needs and factors necessary for the successful transition of adolescents with DD.6 7 From the adolescent’s perspective, barriers to success include their dependence on others, reduced treatment time and access to specialists in the adult health service, lack of information about transition, and lack of involvement in the decision-making process. Parents of adolescents with DD were reluctant or confused about changing responsibilities during the transition period. The majority of challenges came from the service systems and included unclear eligibility criteria and procedures, limited time and lack of carer training, fragmented adult health service provision, lack of communication between service providers, and inaccessibility to resources including information.6 7
 
Based on the 2015 census of ‘Persons with disabilities and chronic diseases in Hong Kong’ from the Hong Kong Census and Statistics Department, there were 22 100 (3.8%) people with disability (excluding cognitive impairment) aged between 15 and 29 years, ie those transitioning from the paediatric to the adult health service.8 Under the Hospital Authority of Hong Kong, all public hospitals and specialist and general out-patient clinics are organised into seven hospital clusters based on geographical location.9 The Duchess of Kent Children’s Hospital (DKCH) is a tertiary centre that provides specialised services for children with orthopaedic problems, spinal deformities, cerebral palsy and neurodevelopmental disorders, and neurological and degenerative diseases. Unlike some overseas health care systems, Hong Kong has no children’s hospital that provides an acute health service. All paediatric patients go to the same hospital as adult patients but are triaged into the paediatric section for management by both in-patient and out-patient services. The list of specialised out-patient clinics under each hospital cluster varies. Children with DD might receive services from a general paediatrics clinic, a cerebral palsy clinic, a Down’s clinic, a behavioural clinic, or a paediatric neurology out-patient clinic. Once a child reaches the age of 18 years, they are referred to the adult section of the same hospital for continued care. They are then followed up in neurology or movement disorder clinics alongside patients with adult-onset neurological conditions or movement disorders, such as stroke, Parkinson’s disease, and multiple sclerosis. There is no separate specialised clinic for complex child-onset DD.10
 
Although adult transition for adolescents with DD has been recognised as a crucial area in health care overseas, it is an under-developed service in Hong Kong.11 A local study found that there is a service gap in adult transition for young people with chronic medical conditions, such as asthma, diabetes, and epilepsy. Training and education are urgently required for both service providers and young people with chronic health conditions and their families.11 It is unclear if the challenges and barriers identified in overseas literature6 7 are applicable to Hong Kong, where the paediatric and adult health services, especially the medical services, are located in the same building. At present, no study has been conducted with adolescents with DD and their families in Hong Kong about the issues they face during clinical transition. As a start, in this pilot study, we aimed to explore the acceptance of clinical transition and identify the main barriers to successful clinical transition for adolescents with DD and their caregivers in Hong Kong.
 
Methods
Participants
A survey was conducted on a convenience sample of adolescents and/or their caregivers recruited from two special high schools (Hong Kong Red Cross John F Kennedy Centre [JFK] and Haven of Hope Sunnyside School [HOH]) in Hong Kong. Students at JFK have primarily physical and multiple disabilities, including cerebral palsy or muscular dystrophy; those at HOH have severe cognitive impairment. Both schools provide on-site rehabilitation services including physiotherapy, occupational therapy, speech therapy, nursing support, and family support via the school social workers. The medical out-patient services for the students fall under different hospital clusters, depending on where the students’ families live. The parents are responsible for taking their adolescent children for medical review. As there was no previous study on which to base a sample size calculation, and as this was a pilot study, we aimed to recruit 10 adolescents and their parents/caregivers from each school.
 
The inclusion criteria of the adolescents were: (1) aged 16 to 19 years and (2) a diagnosis of DD. All participating adolescents and/or their parents or legal guardians gave written informed consent before the survey. For adolescents with severe cognitive impairment, consent was sought from their parents as proxy and only their parents participated in the survey. This pilot study was approved by the Human Subjects Ethics Sub-committee of the Hong Kong Polytechnic University.
 
Survey
A specific questionnaire was developed for this pilot study to collect information relating to: (1) demographic characteristics of the participants; (2) whether the study participants were aware of the transition and their information source(s); (3) whether they were willing to transition to the adult health service and the underlying reasons; (4) for those who had transitioned, whether they were satisfied with the adult health service and the underlying reasons; and (5) their opinion of a clinical transition service. As a pioneering pilot study, ‘health service’ was taken to include both medical and rehabilitation services, and the study aimed to explore the general issues faced by this group of adolescents and their families during clinical transition. Lastly, as we predicted that the competency of the adolescents and their caregivers in managing the disability might influence their perception of clinical transition, information about their self-rated competency was also collected. Questions in the latter part were based on the Adolescent Transition Care Assessment Tool, which was designed to assist health care professionals to provide a better transition for adolescents with chronic illness.12 The whole survey was administered by interviewers to the study adolescents and/or their caregivers separately in Cantonese. All interviews were recorded for data analysis.
 
The survey comprised closed- and open-ended questions. The closed-ended questions were designed to require a dichotomous answer, ie ‘yes’ or ‘no’, or an answer from a set of choices. For example, when asked about the sources of information about clinical transition, the study participants could choose their answers from a list of professionals such as paediatricians, social workers, physiotherapists, and school teachers. The open-ended questions focused on the reasons for an earlier response. For example, when asked if they wished to move to the adult health service, the individual would first answer ‘yes’ or ‘no’, then give their reasons. For the questions about self-perceived competency, study participants read a number of statements (8 for the adolescents and 14 for parents) and indicated how much they agreed with each statement using a Likert scale from ‘strongly disagree’ to ‘strongly agree’.
 
Data analyses
Descriptive statistics (mean, median, standard deviation, and quartiles) were used to analyse the responses to the closed-ended questions. Responses to the open-ended questions were transcribed and summarised by five team members (CLC, KHF, KHWF, LKL, and TCLT). The content was analysed and themes were identified independently by two other team members (TWP and WLSC). These themes were discussed and a consensus was reached by all team members.
 
Results
Thirteen potential families were approached at JFK via the school physiotherapist, anticipating possible refusal by some. All the students and their parents agreed to participate, so all were included. Ten families were approached at HOH via the school social worker, but one parent declined and no other family was interested in participating in the study. Since the students from HOH, who were cognitively impaired, could not be interviewed, only their parents/caregivers were interviewed. As a result, 22 parents (13 from JFK and 9 from HOH) and 13 adolescents (all from JFK) completed the face-to-face survey. The demographic data of the participants are listed in Table 1. Cerebral palsy and cognitive impairment were the principal types of DD. All adolescents received rehabilitation from the Hospital Authority and/or from their special school. All the adolescents accessed between four and seven paediatric medical specialists (eg paediatrician, neurologist, orthopaedic surgeon) and rehabilitation services (eg physiotherapy, occupational therapy, speech therapy), indicating their complex needs. Over 90% of the JFK students were followed up at the DKCH, which is within walking distance of the school (personal communication, Senior Physiotherapist at JFK).
 

Table 1. Demographic information of study participants
 
Table 2 summarises the participant responses about clinical transition. The majority of the parents (77%) and adolescents (85%) knew that clinical transition to adult care would occur at 18 years of age. They were mainly informed by their paediatrician (50% of parents and 69% of adolescents). Most parents (77%) were reluctant to make this move. Ten parents stated that their adolescent child was already receiving care from the adult sector, and over half of them (60%) were dissatisfied with the service. Four of the 13 adolescents clearly stated during the survey that they had transitioned to the adult health service, and all of them were happy with the transition. Among the 17 families who knew that clinical transition to adult care would occur at 18 years of age, the 10 adolescents who had moved to adult care were all over 18 years old; the adolescents in the remaining seven (41%) families were also over 18 years old but were still receiving services from the paediatric sector at the time of the study.
 

Table 2. Summary of responses to questions about clinical transition
 
When asked why they were unwilling to transition or why they were dissatisfied after the transition, the parents’ responses fell into two main areas of concern: reluctance to change and dissatisfaction with the adult health services. Most parents (16/22, 73%) did not want to change their existing care arrangements. When asked why, some parents cited dissatisfaction with the adult health service or with the health system (13/22, 59% of parents for the latter). For example, parents found it difficult to attend follow-up appointments using public transport. Although there was a free shuttle bus service for families who needed it, parents were frustrated by its limited availability.
 
Some parents also wanted more flexible visiting hours in the adult hospital so that they could look after their adolescent children, especially those who were cognitively impaired. The parents worried about the quality of care for their children, who were entirely dependent on others for their daily activities. They were also unhappy about the waiting time for medical appointments, stating that their children with DD had a short attention span and were unable to control their behaviour. Long waiting times in a crowded waiting area, commonly observed in the adult setting, could easily trigger their behavioural problems.
 
There was also dissatisfaction with the adult health service providers (13/22, 59% of parents). Parents often found that the adult medical staff had limited understanding and knowledge of their child’s clinical presentation and abilities, especially for those with severe cognitive impairment. The adult health service providers did not know how to communicate with the cognitively impaired adolescents and treated them like any other adult.
 
There is no formal clinical transition service in Hong Kong, but when asked, the majority of parents (21/22, 95%) and adolescents (11/13, 85%) stated that they would welcome such a service. About two thirds of the study parents and adolescents (23/35, 66%) would like such a service to support them during the transition. About one third of the parents (7/22, 32%) believed that the service could act as a bridge linking the paediatric and adult health services, providing information about available services in the adult sector.
 
The Figure summarises the responses of adolescents for self-perceived competency in managing their disability, and Table 3 summarises the study parents’ responses. Most adolescents demonstrated understanding of instructions (11/13, 85%), confidence in communicating with the service providers about their condition (10/13, 77%), and understanding the importance of treatments for their condition (12/13, 92%) [Fig]. About half were confident in seeking help from different specialties according to their condition (6/13, 46%) and making medical decisions (7/13, 54%) [Fig]. Over half of the adolescents, however, lacked the confidence to attend routine medical visits on their own (8/13, 62%) and worried about the unfamiliar adult medical service (6/13, 46%) [Fig]. Most parents stated that they were familiar with their children’s medical conditions and treatments (20/22, 91%) and able to seek help from different medical specialties based on their child’s condition (14/22, 64%) [Table 3]. Only a minority of parents (1/22, 5%), however, believed that their children were capable of attending medical appointments on their own. Less than half of the parents believed that their children would be able to explain their medical condition (9/22, 41%) or make independent clinical decisions in the future (7/22, 32%). None of the parents of an adolescent with cognitive impairment believed that the child would ever be able to manage their own health.
 

Figure. Responses of study adolescents about their self-perceived competency in managing their disability
 

Table 3. Summary of responses of study parents about their self-perceived competency in managing the disability
 
Discussion
The present pilot study aimed to determine how adolescents with DD and their parents in Hong Kong accept clinical transition and to identify the main barriers to successful transition. This was the first step towards understanding the issues faced by this population group during clinical transition and enabling planning for the future. As far as we know, this study is the first to be conducted in Hong Kong for this population group. Overall, 22 parents and 13 adolescents were recruited from two special schools (one primarily for physically disabled children and the other for severely physically and/or cognitively impaired individuals), with the aim of understanding the acceptance level and barriers faced by these two vastly different groups with DD. The results were very similar between the two subgroups, indicating that the study parents faced similar issues during clinical transition regardless of the type of DD of their child. Hence, the results from the two subgroups were discussed as one group.
 
Most of the study participants were aware that clinical transition is required at the age of 18 years. Only 10 (45%) of the 22 families had shifted to the adult health service, despite the fact that their adolescent child was close to or over 18 years old (Tables 1 and 2). The reasons for this delay were not thoroughly explored, but it has been suggested that medical practitioners in the paediatric service felt that the adolescents were not ready for the transition and so continued to see them well into adulthood, while the parents and adolescents were reluctant to make the move.11 The latter appeared to be true because, when asked, most study participants did not want to change and move to the adult health service. This contradicts the results of a previous local study of adolescents with chronic medical conditions, in which over 80% of the study participants (adolescents and parents) were willing to move to the adult health service.11 The difference is likely due to the complexity of the health conditions of the present cohort. Adolescents with DD usually have varying degrees of physical and/or cognitive impairment and so depend more on others to manage their health condition, making them and their carers more anxious about any change.6 7 For those with chronic medical conditions, the physical and cognitive abilities of the adolescent were unlikely to be affected, and hence the adolescents could manage their condition more independently and adapt more readily after the transition.13 This speculation was supported by the findings on self-perceived competency. Most study adolescents, who had mainly physical disabilities, were not confident about attending a medical appointment alone because of their limited physical abilities (Fig). The reluctance to change may also be due to fear of the unknown and of not being well prepared.11 In Hong Kong, clinical transition is unstructured and unplanned.11 Parents are often informed just before the transition, leading to poor preparation and confusion. Early and continuous transition planning from early adolescence can enable parents and adolescents with DD to be prepared and to participate actively in the process.7 14 Although the clinical transition service is not well known in Hong Kong, the study participants had a positive attitude towards a service that would help them navigate the process by bridging the paediatric and adult health services. In addition, the study parents wished to have more information about available adult health services, such as rehabilitation and wheelchair maintenance services. More information about the unknown has been shown to reduce the reluctance of parents and adolescents to change and to further improve their confidence about moving to adult care.5 6 15 16
 
Another barrier was dissatisfaction with the health care system and service providers in the adult setting (Table 2), a finding in line with the present literature.7 Some parents found it difficult to arrange transport to the adult hospital for follow-up, whereas most appointments had previously been held at the special school. More accessible public transport might help, especially in Hong Kong, where private vehicles are not a common option. Flexible visiting hours in the adult hospital, enabling parents to care for their dependent adolescent child, may also reduce dissatisfaction. Longer waiting times for medical appointments in the adult setting were frequently mentioned by the study parents. In the adult sector, patients with all kinds of neurological conditions, both child-onset and adult-onset, are reviewed in the same clinic, and the number of patients attending is vastly greater than in the paediatric setting. In addition, patients with adult-onset neurological conditions and their families may not understand the characteristics of DD, and stressed behaviour of an adolescent with DD may be perceived by other families as impatience. In the paediatric clinic, where all attendees were children or adolescents with DD and their parents, the waiting time was shorter, and families as well as clinic staff had a full understanding of DD and were more tolerant. It is likely that this lack of support in the adult setting further discouraged the study parents from making the transition willingly. Changes to the existing health care system, such as a separate clinic for child-onset DD conditions, may be a possible small step to assist this group in making a smooth transition. Education about clinical transition for staff in both the paediatric and adult settings would allow them to prepare the families in advance, and education about paediatric conditions and communication with adolescents with DD can equip adult staff with the confidence to develop a strong rapport with the families.16
 
Interestingly, the four adolescents who had transitioned stated that they were happy with the adult health service. Two (50%) indicated that because the ‘new’ doctors did not know them well, they paid more attention to them; one adolescent gave no reason to support his statement. It is likely that in the paediatric sector these adolescents had been followed up from early childhood by the same medical staff, who had virtually watched them grow up; a change of scene and people in the adult sector may therefore be welcome. In addition, they might welcome the idea of starting to participate actively in the consultation as a ‘patient’, unlike in the paediatric sector, where the consultation was directed at their parents, not them.14
 
Parents of adolescents with physical and/or cognitive impairment shared the perception that their child would never be able to attend a medical appointment alone, presumably because of the disability (responses to questions 9 to 10 in Table 3). None of the parents of a cognitively impaired adolescent believed their child to be capable of explaining their medical condition to others or making an independent medical decision. In contrast, parents of a physically disabled child thought that, while it might not apply at present, their child would be able to make their own decisions in the future (responses to questions 11 to 14 in Table 3). Nonetheless, there was a discrepancy in this perception between the study adolescents and parents (Fig and Table 3). Most adolescents believed they could explain their condition to others (question 2 in the Fig) and over half believed that they could already make an independent medical decision at the time of the study (question 7 in the Fig). Further studies are needed to determine whether this discrepancy is due to confusion on the part of the parents because of changing responsibilities during the transition.13 While western literature emphasises the importance of active participation by adolescents during clinical transition,14 it would be interesting to see whether this can be endorsed in a parent-dominant Chinese society such as Hong Kong, where cultural differences may influence attitudes towards clinical transition.17
 
We were unable to analyse other challenges identified in the overseas literature, such as unclear eligibility criteria and procedures and limited time for clinical transition, because comparable data were not available for Hong Kong. In other developed countries, adolescents are transferred with the assistance of a clinical transition service based in the children’s hospital (if any) or according to clinical guidelines for best practice.18 In Hong Kong, adolescents are referred from the paediatric section to the adult section within the same hospital. Each hospital cluster may have different procedures for this ‘transition’, and there is no designated department within the hospital structure to assist adolescents and their families through the process; nor do families receive any advice about how to negotiate it. A future in-depth study is recommended to understand the existing arrangements for clinical transition in different hospital clusters and to determine how to establish a more formal approach across all the hospital clusters in Hong Kong to support this group of adolescents and their families during this confusing time.
 
Limitations of the present study
There may have been selection bias in the study sample because families were recruited by convenience through school staff. Owing to the small sample size, no statistical analysis was conducted to compare the subgroups of adolescents with physical and cognitive impairments. Although the sample size was small, the purpose of this pilot study was not to generalise the findings to all adolescents with DD but to begin to understand the acceptance of clinical transition and the main barriers to success for adolescents with DD and their families in Hong Kong. The present results are also in line with the literature in this area.4 7 11 14 17 Future studies with a larger sample size and more in-depth qualitative data are required to verify the present results. Potential subjective bias in the results, especially for the open-ended questions, was another limitation, but we attempted to minimise this through consensus agreement within the team.
 
Conclusions
In the present explorative study, close to half of the study families had a delayed clinical transition to the adult health service. Most study parents were reluctant for their adolescent children to shift to the adult health service due to unwillingness to change and dissatisfaction with the adult medical service. A structured and well-planned clinical transition was urged by the study participants to bridge the paediatric and adult health services and to provide support to the family. Further studies are required to analyse the needs and concerns of adolescents with DD and their families as well as the service providers in the adult medical setting to facilitate the future development of a clinical transition service in Hong Kong.
 
Acknowledgements
The authors would like to thank all the participating families from the Hong Kong Red Cross John F Kennedy Centre and Haven of Hope Sunnyside School.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Westbom L, Bergstrand L, Wagner P, Nordmark E. Survival at 19 years of age in a total population of children and young people with cerebral palsy. Dev Med Child Neurol 2011;53:808-14.
2. Public Law 98-527, Developmental Disabilities Act of 1984.
3. Staff J, Mortimer JT. Diverse transitions from school to work. Work Occup 2003;30:361-9.
4. Blum RW, Garell D, Hodgman CH, et al. Transition from child-centered to adult health-care systems for adolescents with chronic conditions. A position paper of the Society for Adolescent Medicine. J Adolesc Health 1993;14:570-6.
5. American Academy of Pediatrics, American Academy of Family Physicians, American College of Physicians-American Society of Internal Medicine. A consensus statement on health care transitions for young adults with special health care needs. Pediatrics 2002;110(6 Pt 2):1304-6.
6. Bindels-de Heus KG, van Staa A, van Vliet I, Ewals FV, Hilberink SR. Transferring young people with profound intellectual and multiple disabilities from pediatric to adult medical care: parents’ experiences and recommendations. Intellect Dev Disabil 2013;51:176-89.
7. Stewart D, Stavness C, King G, Antle B, Law M. A critical appraisal of literature reviews about the transition to adulthood for youth with disabilities. Phys Occup Ther Pediatr 2006;26:5-24.
8. Persons with disabilities and chronic diseases in Hong Kong. Hong Kong: Hong Kong Census and Statistics Department; 2016. Available from: http://www.statistics.gov.hk/pub/B71501FB2015XXXXB0100.pdf. Accessed Jul 2016.
9. Clusters, hospitals & institutions. Hospital Authority. 2016. Available from: http://www.ha.org.hk/visitor/ha_visitor_index.asp?Content_ID=10036&Lang=ENG&Dimension=100&Parent_ID=10004. Accessed Jul 2016.
10. Hospital Authority Statistical Report 2012-2013. Hong Kong: Hospital Authority; 2013.
11. Wong LH, Chan FW, Wong FY, et al. Transition care for adolescents and families with chronic illnesses. J Adolesc Health 2010;47:540-6.
12. Hong Kong Society for Adolescent Health. Adolescent Transition Care Assessment Tool, Public Education Series No. 12 (2013). Available from: http://hksah.blogspot.hk/2013/11/adolescent-transition-care-assessment.html. Accessed 6 Feb 2016.
13. Stewart DA, Law MC, Rosenbaum P, Willms DG. A qualitative study of the transition to adulthood for youth with physical disabilities. Phys Occup Ther Pediatr 2002;21:3-21.
14. Viner RM. Transition of care from paediatric to adult services: one part of improved health services for adolescents. Arch Dis Child 2008;93:160-3.
15. Blum RW. Introduction. Improving transition for adolescents with special health care needs from pediatric to adult-centered health care. Pediatrics 2002;110(6 Pt 2):1301-3.
16. Stewart D. Transition to adult services for young people with disabilities: current evidence to guide future research. Dev Med Child Neurol 2009;51 Suppl 4:169-73.
17. Barnhart RC. Aging adult children with developmental disabilities and their families: challenges for occupational therapists and physical therapists. Phys Occup Ther Pediatr 2001;21:69-81.
18. Department of Health. National Service Framework for Children, Young People and Maternity Services. Transition: getting it right for young people. Improving the transition of young people with long term conditions from children’s to adult health services. 2006. Available from: http://dera.ioe.ac.uk/8742/1/DH_4132145%3FIdcService%3DGET_FILE%26dID%3D23915%26Rendition%3DWeb. Accessed Jul 2016.

Can surgical need in patients with Naja atra (Taiwan or Chinese cobra) envenomation be predicted in the emergency department?

Hong Kong Med J 2016 Oct;22(5):435–44 | Epub 12 Aug 2016
DOI: 10.12809/hkmj154739
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Can surgical need in patients with Naja atra (Taiwan or Chinese cobra) envenomation be predicted in the emergency department?
HY Su, MD1; MJ Wang, PhD2; YH Li, PhD3; CN Tang, MD4; MJ Tsai, MD, PhD5
1 Department of Emergency Medicine, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan; Department of Emergency Medicine, Buddhist Tzu Chi General Hospital, Hualien, Taiwan
2 Department of Medical Research, Buddhist Tzu Chi General Hospital, Hualien, Taiwan
3 Department of Public Health, Tzu Chi University, Hualien, Taiwan
4 Department of Family Medicine, Buddhist Tzu Chi General Hospital, Hualien, Taiwan
5 Department of Emergency Medicine, Ditmanson Medical Foundation Chiayi Christian Hospital, Chiayi, Taiwan; Department of Sports Management, Chia Nan University of Pharmacy and Science, Tainan, Taiwan
 
Corresponding author: Dr MJ Tsai (tshi33@gmail.com)
 
An earlier version of this paper was presented at the 7th Asian Conference on Emergency Medicine held in Tokyo, Japan on 23-25 October 2013.
 
 
Abstract
Objectives: To investigate the clinical predictors and the aetiologies for surgery in patients with Naja atra (Taiwan or Chinese cobra) envenomation.
 
Methods: This case series was conducted in the only tertiary care centre in eastern Taiwan. Patients who presented to the emergency department with Naja atra bite between January 2008 and September 2014 were included. Clinical information was collected and compared between surgical and non-surgical patients.
 
Results: A total of 28 patients with Naja atra envenomation presented to the emergency department during the study period. Of these, 60.7% (n=17) required surgery. Necrotising fasciitis (76.5%) was the main finding in surgery. Comparisons between surgical and non-surgical patients showed skin ecchymosis (odds ratio=34.36; 95% confidence interval, 2.20-536.08; P=0.012) and a high total dose of antivenin (≥6 vials; odds ratio=14.59; 95% confidence interval, 1.10-192.72; P=0.042) to be the most significant predictors of surgery. The rate of bacterial isolation from the surgical wound was 88.2%. Morganella morganii (76.5%), Enterococcus faecalis (58.8%), and Bacteroides fragilis (29.4%) were the most common pathogens involved. Bacterial susceptibility testing indicated that combined broad-spectrum antibiotics were needed to cover mixed aerobic and anaerobic bacterial infection.
 
Conclusions: Patients with Naja atra envenomation who present with skin ecchymosis or the need for a high dose of antivenin may require early surgical assessment. Combined broad-spectrum antibiotics are mandatory.
 
 
New knowledge added by this study
  • Among the six major venomous snakebites in Taiwan, Naja atra envenomation most commonly leads to surgical intervention.
  • Ecchymosis on the bite wound may be a good indicator for surgical need in N atra envenomation.
  • Adequate antibiotic treatment may play an important role in the early management of N atra envenomation.
Implications for clinical practice or policy
  • Surgical debridement and broad-spectrum antibiotic treatment are suggested in patients with N atra envenomation who develop ecchymosis. Surgery is more likely when high-dose antivenin has been used.
 
 
Introduction
Snakebites are an important public health and wilderness medical issue in Taiwan. Because of the warm and humid climate in Taiwan, there are more than 40 terrestrial snake species, of which 15 are venomous. Six of the venomous species are of high clinical importance, including Protobothrops mucrosquamatus (Taiwan Habu), Trimeresurus stejnegeri (Taiwan bamboo viper), Naja atra (Taiwan or Chinese cobra), Bungarus multicinctus (banded krait), Deinagkistrodon acutus (hundred pacer), and Daboia russelii siamensis (Russell’s viper).1 2
 
Naja atra belongs to the Elapidae family, and in addition to Taiwan, it inhabits southern China, Hong Kong, northern Laos, and northern Vietnam.3 Cobra venom contains a mixture of components, including cardiotoxin, cobrotoxin, haemotoxin, and phospholipase A2.4 Patients envenomed by a cobra experience varying degrees of neurotoxicity and cytotoxicity depending upon the proportions of the venom components. Due to evolution and geographical variations, different cobra species cause distinct clinical effects. For example, Naja philippinensis (northern Philippine cobra) causes a purely neurotoxic effect without local cytotoxicity.5 In contrast, N atra envenomation is associated with more cytotoxic effects.3 6 7 Although an equine-derived bivalent F(ab)2 antivenin has been produced by the Centers for Disease Control, ROC (Taiwan) to neutralise the venom of N atra, the surgical intervention rate remains high.1 8 The main objective of this study was to investigate the clinical presentations and predictors for surgery in patients with N atra envenomation. Due to high wound infection rates, the isolated bacteria from surgical wounds and the antimicrobial susceptibility were also analysed.
 
Methods
Study design and patient population
The Buddhist Tzu Chi General Hospital is the only tertiary care centre in eastern Taiwan. There are 1000 beds and the emergency department (ED) has more than 55 000 patient visits per year. This hospital is also the toxicant, drug information, and antidote control centre for eastern Taiwan. A retrospective study was conducted to analyse data from patients admitted to the ED with N atra envenomation between 1 January 2008 and 30 September 2014.
 
Data collection, processing, and categorisation
A medical assistant was responsible for collecting the medical records of patients admitted with snakebite during the study period by using the computerised chart system and International Classification of Diseases, 9th Revision, Clinical Modification codes 989.5, E905.0, E905.9, E906.2, and E906.5. Two physicians (the first and fifth authors) independently reviewed the charts and categorised these patients as having venomous or non-venomous snakebites based on the patient’s presentation with or without toxic effects. For venomous snakebites, classification of the snake species was based on the identification of the snake brought in by the patient or identification by the patient from a picture. All the included patients had a compatible presentation and consistent antivenin use as recorded in the patient chart. Patients who were initially recognised as having venomous snakebites but did not receive antivenin treatment were excluded from the study because of the high probability of a dry bite or misidentification of the snake species. Patients with a toxic presentation who could not identify the snake species or who received more than one type of antivenin were recorded as having an unknown poisonous snakebite.
 
Here we only report patients who were bitten by N atra. To identify the early clinical predictors of surgery, we categorised the patients into surgical and non-surgical groups. All surgical interventions were performed after surgical consultation in the ED or after admission when patients presented with progressive signs suggesting tissue necrosis, necrotising fasciitis, or suspected compartment syndrome. The final diagnoses of necrotising fasciitis and compartment syndrome were made according to surgical pathological findings and intracompartmental pressure measurement, respectively. The surgical procedures included debridement, fasciotomy, skin graft, and digit or limb amputation. The potential clinical predictors of surgery in N atra envenomation included the patient’s age, gender, season of snakebite, co-morbidities, details of envenomation, site of snakebite, initial vital signs on arriving at the ED, clinical presentation, laboratory data, treatment, timing of initial antivenin therapy, and total dose of antivenin.
 
For the laboratory analyses, the initial data obtained in the ED were collected, including haematology, biochemistry, and coagulation profiles. With regard to clinical presentation, the local signs and symptoms, local complications, and systemic manifestations and complications were classified. Local signs and symptoms included swelling, ecchymosis, necrosis, numbness, and bulla formation. Local complications included necrotising fasciitis and suspected compartment syndrome. Systemic manifestations and complications included neurological symptoms such as ptosis, blurred vision, drooling, and paralysis of facial, limb, or respiratory muscles; leukocytosis, defined as a white blood cell count of >11.0 × 10⁹/L; thrombocytopenia, defined as a platelet count of <150 × 10³/mm³;2 prothrombin time (PT) prolongation, defined as a PT of >11.6 seconds; activated partial thromboplastin time (aPTT) prolongation, defined as an aPTT of >34.9 seconds (prolonged PT and aPTT were defined according to our clinical laboratory reference range); fibrinogen consumption, defined as a fibrinogen level of <1.8 g/L; elevated D-dimer, defined as a D-dimer level of >500 µg/L; acute renal impairment, defined as a creatinine level of >123.8 µmol/L9; and rhabdomyolysis, defined as a creatine kinase level of >1000 U/L.10 Two physicians reviewed the charts of the enrolled patients and rechecked the accuracy of the data collection. If a patient’s initial vital signs were not measured or laboratory tests were not performed in the ED, this was recorded as a missing value in the database. Any discrepancy in the collected data was resolved through discussion with the third physician on the research team. The study protocol was approved by the institutional review board of the Buddhist Tzu Chi General Hospital (IRB102-38). All patient records and information were anonymised and de-identified prior to analysis.
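For illustration, the laboratory definitions above translate directly into simple screening rules. The following sketch is an editorial illustration only (not the study’s code); the thresholds are those stated above, while the field names and sample values are hypothetical:

```python
# Editorial illustration: the laboratory definitions above expressed as
# screening rules. Thresholds are those stated in the text; the field names
# and sample values are hypothetical.
def systemic_flags(lab: dict) -> dict:
    return {
        "leukocytosis": lab["wbc_10e9_L"] > 11.0,
        "thrombocytopenia": lab["platelets_10e3_mm3"] < 150,
        "PT_prolongation": lab["pt_s"] > 11.6,
        "aPTT_prolongation": lab["aptt_s"] > 34.9,
        "fibrinogen_consumption": lab["fibrinogen_g_L"] < 1.8,
        "elevated_D_dimer": lab["d_dimer_ug_L"] > 500,
        "acute_renal_impairment": lab["creatinine_umol_L"] > 123.8,
        "rhabdomyolysis": lab["ck_U_L"] > 1000,
    }

sample = {"wbc_10e9_L": 12.4, "platelets_10e3_mm3": 180, "pt_s": 11.0,
          "aptt_s": 30.0, "fibrinogen_g_L": 2.2, "d_dimer_ug_L": 850,
          "creatinine_umol_L": 90.0, "ck_U_L": 400.0}
print({k: v for k, v in systemic_flags(sample).items() if v})
# -> {'leukocytosis': True, 'elevated_D_dimer': True}
```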
 
Statistical analyses
To identify early clinical presentations and laboratory data significantly associated with surgery in patients with N atra envenomation, univariate analysis was performed using the Student’s t test or the Mann-Whitney U test for continuous variables and the Chi squared test for categorical variables. A P value of <0.05 was considered statistically significant, and all statistical tests were two-tailed. For multivariate analysis, categorical variables with a P value of <0.05 in the univariate analysis were entered into a forward stepwise (Wald) logistic regression to calculate odds ratios (ORs). The Statistical Package for the Social Sciences (Windows version 12.0; SPSS Inc, Chicago [IL], US) was used for all statistical analyses.
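As an illustration of the analysis pipeline described above (an editorial sketch, not the authors’ SPSS code), the same steps can be reproduced in Python; the data below are synthetic and the variable names hypothetical:

```python
# Editorial sketch of the analysis described above (not the authors' code).
# Data are synthetic; variable names ("surgery", "ecchymosis", etc) are
# hypothetical stand-ins for the study variables.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 28
df = pd.DataFrame({
    "surgery": rng.integers(0, 2, n),
    "ecchymosis": rng.integers(0, 2, n),
    "high_antivenin": rng.integers(0, 2, n),   # >=6 vials in total
    "wbc": rng.normal(10.0, 3.0, n),           # x10^9/L
})

# Univariate screening: t test for a continuous variable,
# chi-squared test for a categorical variable (two-tailed, alpha=0.05)
t, p_wbc = stats.ttest_ind(df.wbc[df.surgery == 1], df.wbc[df.surgery == 0])
chi2, p_ecc, dof, expected = stats.chi2_contingency(
    pd.crosstab(df.ecchymosis, df.surgery))

# Multivariate step: logistic regression on variables passing P<0.05.
# statsmodels has no built-in stepwise routine, so a plain model over the
# pre-selected predictors stands in for SPSS's forward stepwise (Wald) method.
X = sm.add_constant(df[["ecchymosis", "high_antivenin"]].astype(float))
fit = sm.Logit(df.surgery, X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```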
 
Results
Epidemiology and surgical intervention rate for snake envenomation
Between 1 January 2008 and 30 September 2014, a total of 245 patients with venomous snakebites were recorded. Among these, 64 (26.1%) patients had P mucrosquamatus envenomation, 56 (22.9%) had T stejnegeri envenomation, 28 (11.4%) had N atra envenomation, five (2.0%) had B multicinctus envenomation, six (2.4%) had D acutus envenomation, seven (2.9%) had D r siamensis envenomation, and 79 (32.2%) had unknown poisonous snake envenomation.
 
The snakebites associated with the highest surgical intervention rates were N atra (60.7%), followed by D acutus (33.3%), and P mucrosquamatus (12.5%).
 
Characteristics and clinical status of patients with Naja atra envenomation
Of the 28 patients with a N atra bite, 20 (71.4%) were male. The mean (± standard deviation) age of patients was 52.3 ± 3.2 years. Of the patients, 22 (78.6%) were bitten in the summer or fall; 17 (60.7%) were bitten on an upper limb; and 17 (60.7%) with N atra envenomation received surgical treatment. These patients had a significantly longer duration of hospitalisation than non-surgical patients (27.5 ± 10.2 days vs 2.7 ± 3.1 days; P<0.001). The main operative diagnosis was necrotising fasciitis (n=13, 76.5%) with confirmation by histopathology. The clinical characteristics of the 17 surgical patients are shown in Table 1. The mean duration from the time of initial presentation to the day of surgery was 5.5 ± 4.3 days. All 13 patients with necrotising fasciitis underwent emergency fasciotomy and debridement, and two required limb or digit amputation. The other four surgical patients without necrotising fasciitis only received local debridement with or without skin graft due to local tissue necrosis. Therefore, a smaller surgical wound and a shorter duration of hospitalisation were observed for these patients (Table 1). Nearly all surgical patients presented with local swelling and ecchymosis on the bite wound. Only one non-surgical patient presented with ecchymosis on a finger and was discharged from the ED 1 day later after four vials of antivenin were administered. The Figure shows the initial ecchymosis and necrosis of a N atra bite wound, the development of extensive tissue necrosis, and the postoperative wounds of a surgical patient (patient No. 9 in Table 1).
 

Table 1. Clinical characteristics of the 17 surgical patients with Naja atra envenomation
 

Figure. Patient No. 9 in Table 1
A 59-year-old man bitten by Naja atra on his left foot presented to our hospital 6 hours after the snakebite. (a) Despite the use of 10 vials of antivenin, progressive ecchymosis and necrosis of the bite wound developed. (b) Fasciotomy and debridement were performed on the second day after presentation. (c) Progressive wound necrosis and necrotising fasciitis of the leg developed 5 days later. (d and e) He underwent a second surgical debridement of the foot and fasciotomy of the leg
 
Demographic and clinical characteristics associated with surgical treatment in patients with Naja atra envenomation
The demographic and clinical characteristics were compared between the surgical and non-surgical patients with N atra envenomation (Tables 2 and 3). Overall, the surgical patients received significantly higher doses of antivenin (9.2 ± 4.9 vials vs 3.8 ± 2.4 vials; P=0.002) and had significantly higher white blood cell counts (11.0 ± 3.7 × 10⁹/L vs 8.2 ± 2.4 × 10⁹/L; P=0.043). A higher respiratory rate was also evident in surgical patients (median [interquartile range]: 20 [20-21] vs 18 [16-18] breaths/min; P=0.015), but the incidence of missing data in both groups for this factor was high (Table 2). A significantly higher proportion of surgical patients received six or more vials of antivenin in total compared with non-surgical patients (82.4% vs 18.2%; P=0.001) [Table 3]. For local signs, symptoms and complications, a significantly higher proportion of surgical patients presented with local swelling (100% vs 72.7%; P=0.05), ecchymosis (82.4% vs 9.1%; P<0.001), necrosis (58.8% vs 0%; P=0.002), bulla formation (41.2% vs 0%; P=0.023), and necrotising fasciitis (76.5% vs 0%; P<0.001) [Table 3]. Age, season and site of snakebite, co-morbidity with diabetes, allergy to antivenin, and other systemic manifestations were not found to be significantly different between surgical and non-surgical patients. None of the patients with N atra envenomation presented with neurological symptoms. One patient with a small area of ecchymosis on the bite wound of his left hand did not receive surgical intervention, because the condition of the local wound improved and healed after administration of four vials of antivenin and intravenous antibiotics.
 

Table 2. Clinical and laboratory characteristics of 28 patients with Naja atra envenomation
 

Table 3. Demographics, and clinical and laboratory characteristics of 28 patients with Naja atra envenomation
 
Independent predictors of surgery in patients with Naja atra envenomation
To determine clinical predictors of surgery, a multivariate logistic regression analysis was conducted for the significant variables derived from the univariate analysis. Necrotising fasciitis was not included in the multivariate analysis because it was a surgical finding and not an early sign that could be identified in the ED. The results showed that local ecchymosis (OR=34.36; 95% confidence interval [CI], 2.20-536.08; P=0.012) and a high total dose of antivenin (≥6 vials; OR=14.59; 95% CI, 1.10-192.72; P=0.042) were the most significant clinical predictors of surgery in patients with N atra envenomation.
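As a rough editorial cross-check (not reported in the paper), the crude, unadjusted odds ratio for ecchymosis can be recovered from the counts implied by the reported percentages (14/17 surgical vs 1/11 non-surgical patients); it differs from the adjusted OR of 34.36, as expected:

```python
# Editorial cross-check: crude (unadjusted) OR for ecchymosis from the 2x2
# counts implied by the reported percentages. The paper's OR of 34.36 is
# adjusted, so a different crude value is expected.
import math

a, b = 14, 3    # surgical: ecchymosis present / absent (82.4% of 17)
c, d = 1, 10    # non-surgical: ecchymosis present / absent (9.1% of 11)

or_crude = (a * d) / (b * c)           # = 46.7
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's method on the log scale
lo = math.exp(math.log(or_crude) - 1.96 * se)
hi = math.exp(math.log(or_crude) + 1.96 * se)
print(f"crude OR {or_crude:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```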
 
Bacterial isolates identified from the snakebite wounds of surgical patients with Naja atra envenomation, and bacterial susceptibility to common antibiotics
To analyse the cause of necrotising fasciitis, the bacterial isolates identified from the snakebite wounds of surgical patients were further analysed. The positive culture rate was 88.2% (n=15). More than one type of bacterium was isolated from the snakebite wound in 14 (82.4%) surgical patients. The isolated pathogens included aerobic Gram-positive and Gram-negative bacteria, as well as anaerobic bacteria. The most commonly identified pathogen was Morganella morganii (76.5%), followed by Enterococcus faecalis (58.8%) and Bacteroides fragilis (29.4%) [Table 4].
 

Table 4. Bacterial isolates from snakebite wounds of surgical patients
 
The susceptibility of the bacteria to common antibiotics was analysed (Table 5). All Gram-positive bacteria were susceptible to vancomycin and teicoplanin. All Gram-negative bacteria were susceptible to cefotaxime and amikacin. Cefmetazole, gentamicin, levofloxacin, and trimethoprim/sulfamethoxazole were also effective against the isolated Gram-negative bacteria. Nearly all anaerobic bacteria were susceptible to clindamycin and metronidazole (Table 5).
 

Table 5. Susceptibility of bacteria isolated from snakebite wounds to common antibiotics
 
Discussion
In our study, ecchymosis on the bite wound was a good clinical predictor of surgery for N atra envenomation. The effects of N atra venom are predominantly cytotoxic rather than haemorrhagic. The cardiotoxin and phospholipase A2 in N atra venom are directly cytotoxic polypeptides that degrade cell membranes; they induce cell death by activating calcium-dependent proteases and inhibiting mitochondrial respiration. Hyaluronidase in N atra venom destroys interstitial constituents and promotes the spread of venom.11 A histopathological study of N atra bite wounds demonstrated thrombotic and fibrinoid deposits in superficial and deep dermal vessels, and leukocytoclastic vasculitis.12 Hence, both the cytotoxic and ischaemic effects of N atra venom may lead to blood extravasation from destroyed subcutaneous vessels or capillaries, resulting in the characteristic ecchymosis on the bite wound. This may be an important clinical sign of irreversible subcutaneous tissue necrosis due to developing tissue ischaemia.3 If management at this stage is inadequate, tissue destruction may progress rapidly and extensively to involve the fascia, with ultimate development of necrotising fasciitis.13 In our patients, extensive tissue destruction beyond the original bite site was evident once necrotising fasciitis developed. Further study is required to verify whether early surgical intervention can prevent the development of necrotising fasciitis, reduce the size of the surgical wound, or shorten the length of hospital stay. Nonetheless, surgical assessment may be needed in patients with N atra bite who present with local ecchymosis on the bite wound.
 
Traditionally, immediate injection of antivenin to neutralise N atra venom was considered the only effective management.14 A study using an enzyme-linked immunosorbent assay to measure N atra venom levels revealed that two to eight vials of antivenin are sufficient to eliminate systemic circulating venom if presentation is early.6 The efficacy of systemically administered antivenin in diminishing local tissue destruction remains controversial, however, and needs further study.3 In an animal study, the cytotoxic venom of N atra was shown to bind with high affinity to tissues, leading to extensive local tissue destruction.15 This finding may explain the difficulty of neutralising local venom toxicity, especially in cases of delayed presentation. Thus, the adequate dose of antivenin for preventing advanced tissue destruction remains unknown. In our study, nearly all patients presented within 1 hour of envenoming. Intravenous antivenin was administered as soon as clinically possible after identification of cobra envenoming. Interestingly, higher doses of antivenin in patients with N atra envenomation did not decrease surgical rates even in cases of early presentation. More than half of the patients underwent surgery, and the majority were diagnosed with necrotising fasciitis. Surgical intervention appears to be crucial in the management of N atra envenomation. Hence, the identification of clinical predictors of surgical need, and sufficient evidence to support surgeons’ decisions to carry out early surgical intervention, are important issues in N atra management.
 
High bacterial isolation rates and the growth of a mixed spectrum of bacteria from bite wounds indicate bacterial infection (which may be another cause of necrotising fasciitis in N atra envenomation), bacterial colonisation, or both. Morganella morganii and Enterococcus species were the most common pathogens cultured from N atra bite wounds in this study. This finding is consistent with bacterial cultures taken from oral swabs of N atra in Hong Kong.16 Similar results were also described in a previous study in northern Taiwan.17 Hence, adequate antibiotic cover is important in the management of N atra envenomation. Based on the antibiotic susceptibility of the isolated bacteria, treatment with a glycopeptide antibiotic (vancomycin or teicoplanin) combined with a third-generation cephalosporin (cefotaxime), with or without anti-anaerobic antibiotics (clindamycin or metronidazole), is recommended.
 
Limitations
There are several limitations to our study. First, it was a retrospective comparative chart review. Non-uniform documentation of symptoms and signs by different providers may have affected the validity of the statistics. Second, the small sample size may limit the statistical power of the multivariate analysis. Third, there are no definitive guidelines for the management of venomous snakebites in Taiwan, and various treatment strategies were employed; this may have influenced the final outcome. A large-scale prospective study is warranted to verify the risk factors we have identified and to provide more accurate data for early risk stratification, treatment, and management of these patients.
 
Conclusions
Of the six common venomous snakes in eastern Taiwan, bites by N atra most frequently lead to surgical intervention. Severe tissue necrosis and necrotising fasciitis were the main findings during surgery. Patients who present with ecchymosis on the bite wound or who require higher doses of antivenin may have a higher probability of surgical intervention. In addition to early and adequate antivenin treatment, combined broad-spectrum antibiotics and surgical intervention may be needed in the management of N atra snakebites.
 
Acknowledgement
This work was supported by Buddhist Tzu Chi General Hospital Grants TCRD103-53 (to the first author).
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Liau MY, Huang RJ. Toxoids and antivenoms of venomous snakes in Taiwan. Toxin Rev 1997;16:163-75.
2. Hung DZ. Taiwan’s venomous snakebite: epidemiological, evolution and geographic differences. Trans R Soc Trop Med Hyg 2004;98:96-101.
3. Wong OF, Lam TS, Fung HT, Choy CH. Five-year experience with Chinese cobra (naja atra)–related injuries in two acute hospitals in Hong Kong. Hong Kong Med J 2010;16:36-43.
4. Li S, Wang J, Zhang X, et al. Proteomic characterization of two snake venoms: Naja naja atra and Agkistrodon halys. Biochem J 2004;384:119-27.
5. Watt G, Padre L, Tuazon L, Theakston RD, Laughlin L. Bites by the Philippine cobra (Naja naja philippinensis): prominent neurotoxicity with minimal local signs. Am J Trop Med Hyg 1988;39:306-11.
6. Hung DZ, Liau MY, Lin-Shiau SY. The clinical significance of venom detection in patients of cobra snakebite. Toxicon 2003;41:409-15.
7. Wang W, Chen QF, Yin RX, et al. Clinical features and treatment experience: A review of 292 Chinese cobra snakebites. Environ Toxicol Pharmacol 2014;37:648-55.
8. Huang LW, Wang JD, Huang JA, Hu SY, Wang LM, Tsan YT. Wound infections secondary to snakebite in central Taiwan. J Venom Anim Toxins Incl Trop Dis 2012;18:272-6.
9. Hung DZ, Wu ML, Deng JF, Lin-Shiau SY. Russell’s viper snakebite in Taiwan: differences from other Asian countries. Toxicon 2002;40:1291-8.
10. Chen YW, Chen MH, Chen YC, et al. Differences in clinical profiles of patients with Protobothrops mucrosquamatus and Viridovipera stejnegeri envenoming in Taiwan. Am J Trop Med Hyg 2009;80:28-32.
11. Harris JB. Myotoxic phospholipases A2 and the regeneration of skeletal muscles. Toxicon 2003;42:933-45.
12. Pongprasit P, Mitrakul C, Noppakun N. Histopathology and microbiological study of cobra bite wounds. J Med Assoc Thai 1988;71:475-80.
13. Gozal D, Ziser A, Shupak A, Ariel A, Melamed Y. Necrotizing fasciitis. Arch Surg 1986;121:233-5.
14. Russell FE. Snake venom immunology: historical and practical considerations. Toxin Rev 1988;7:1-82.
15. Guo MP, Wang QC, Liu GF. Pharmacokinetics of cytotoxin from Chinese cobra (Naja naja atra) venom. Toxicon 1993;31:339-43.
16. Lam KK, Crow P, Ng KH, et al. A cross-sectional survey of snake oral bacterial flora from Hong Kong, SAR, China. Emerg Med J 2011;28:107-14.
17. Chen CM, Wu KG, Chen CJ, Wang CM. Bacterial infection in association with snakebite: a 10-year experience in a northern Taiwan medical center. J Microbiol Immunol Infect 2011;44:456-60.

Multimodal analgesia model to achieve low postoperative opioid requirement following bariatric surgery

Hong Kong Med J 2016 Oct;22(5):428–34 | Epub 15 Jul 2016
DOI: 10.12809/hkmj154769
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Multimodal analgesia model to achieve low postoperative opioid requirement following bariatric surgery
Katherine KY Lam, FHKCA, FHKAM (Anaesthesiology); Wilfred LM Mui, FCSHK, FHKAM (Surgery)
Hong Kong Bariatric and Metabolic Institute and Evangel Hospital Weight Management Centre, Room 610, Champion Building, 301-309 Nathan Road, Jordan, Hong Kong
 
Corresponding author: Dr Katherine KY Lam (katherinelamky@gmail.com)
 
Abstract
Objective: To investigate whether a new anaesthesia protocol can reduce opioid use in obese patients following laparoscopic sleeve gastrectomy.
 
Methods: This prospective observational case series was conducted in a private hospital in Hong Kong that has been accredited as a Centre of Excellence for Bariatric Surgery. Thirty consecutive patients scheduled for laparoscopic sleeve gastrectomy from 1 January 2015 to 31 March 2015 were reviewed.
 
Results: Of the 30 patients, 14 (46.7%) did not require any opioids for rescue analgesia during the entire postoperative period; six (20.0%) required rescue opioids only in the post-anaesthetic care unit, but not in the surgical ward. The mean postoperative total opioid requirement per patient was 32 mg of pethidine.
 
Conclusion: By combining multimodal analgesia with local anaesthetic wound infiltration, it is possible to avoid giving potent long-acting opioids in anaesthesia for bariatric surgery.
 
New knowledge added by this study
  • It is possible to avoid giving potent long-acting opioids in anaesthesia for bariatric surgery, by using multimodal analgesia with a combination of paracetamol, pregabalin, COX-2 inhibitors, tramadol, ketamine, dexmedetomidine, and local anaesthetic wound infiltration.
Implications for clinical practice or policy
  • The use of this opioid-sparing anaesthetic technique can potentially reduce the adverse effects and morbidity associated with the use of opioids in obese patients. The technique can be extended to other types of surgery in obese patients.
 
 
Introduction
Obese patients are particularly sensitive to the sedative and respiratory depressive effects of long-acting opioids. Many obese patients also have obstructive sleep apnoea syndrome (OSAS) and are prone to airway obstruction and desaturation in the postoperative period, especially if opioids have been given.1 2 Given this background, multimodal analgesia is advocated for bariatric surgery with the aim of reducing opioid use.3 4 At the time of writing, no study had demonstrated a technique that consistently removes the need for any postoperative opioid analgesia. In this study, we report the use of an anaesthesia protocol that allowed a significant proportion of our patients undergoing laparoscopic sleeve gastrectomy to remain completely free of long-acting potent opioids in the intra-operative and postoperative period.
 
Methods
Patient selection
This was a prospective observational study conducted in a private hospital in Hong Kong that has been accredited as a Centre of Excellence for Bariatric Surgery. All patients scheduled for laparoscopic sleeve gastrectomy for management of obesity or type 2 diabetes from 1 January 2015 onwards were anaesthetised using the same protocol. We analysed 30 consecutive cases between 1 January 2015 and 31 March 2015 to investigate the postoperative opioid requirements using this anaesthesia protocol. Patients were excluded from the case series if they had contra-indications or allergy to any of the anaesthetic or analgesic drugs, or if anaesthesia deviated from the standard protocol for any reason. Three patients were excluded: one was taking a selective serotonin reuptake inhibitor antidepressant, so pethidine was avoided to prevent serotonin syndrome (morphine was given instead); one was allergic to non-steroidal anti-inflammatory drugs (NSAIDs), so intravenous parecoxib and oral etoricoxib were not given; and one accidentally received a larger intra-operative dose of ketamine than allowed by the protocol. Concomitant laparoscopic cholecystectomy was performed with laparoscopic sleeve gastrectomy in three patients who were included in the study.
 
The anaesthesia protocol
All patients were fasted from midnight on the night before surgery. All operations were scheduled in the morning. Patients were premedicated with oral pantoprazole 40 mg on the night before surgery, and 2 g of oral paracetamol and 150 mg or 300 mg of oral pregabalin (for patients of body mass index <35 kg/m² or ≥35 kg/m², respectively) 2 hours before surgery.
 
Upon arrival in the operating theatre, intravenous access was established and 1 to 2 mg of intravenous midazolam was administered, followed by an infusion of dexmedetomidine. The dose of dexmedetomidine was titrated according to the lean body weight (LBW) calculated using the Hume formula.5 The starting dose of dexmedetomidine was 0.2 µg/kg/h based on LBW.6 No loading dose was given.
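As a hypothetical worked example (not from the paper), the Hume formula and the protocol’s starting rate can be combined as follows; the Hume coefficients are the published 1966 values, and the example patient is illustrative:

```python
# Editorial sketch of the LBW-based dexmedetomidine dosing described above.
# The Hume formula coefficients are the published 1966 values; the example
# patient is hypothetical.
def hume_lbw(weight_kg: float, height_cm: float, male: bool) -> float:
    """Lean body weight (kg) by the Hume formula."""
    if male:
        return 0.32810 * weight_kg + 0.33929 * height_cm - 29.5336
    return 0.29569 * weight_kg + 0.41813 * height_cm - 43.2933

def dex_rate_ug_h(lbw_kg: float, rate_ug_kg_h: float = 0.2) -> float:
    """Starting infusion in ug/h at the protocol's 0.2 ug/kg/h (no loading dose)."""
    return rate_ug_kg_h * lbw_kg

lbw = hume_lbw(weight_kg=120.0, height_cm=170.0, male=True)  # ~67 kg lean mass
print(f"LBW {lbw:.1f} kg -> dexmedetomidine {dex_rate_ug_h(lbw):.1f} ug/h")
```

Note how the LBW-based rate for this hypothetical 120-kg patient (about 13.5 µg/h) is far lower than a rate scaled to total body weight would be, which is the point made in the Discussion below.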
 
Standard monitoring was applied to the patient together with a bispectral index (BIS) monitor and peripheral nerve stimulation monitor. Graduated compression stockings and sequential compression devices were used for all patients. Induction of anaesthesia was accomplished with fentanyl 100 µg, a titrated dose of propofol, and either suxamethonium or rocuronium as appropriate. The trachea was intubated and patients were ventilated with a mixture of air, oxygen, and desflurane.
 
Intra-operatively, desflurane was titrated to maintain a BIS value between 40 and 60. Muscle relaxation was maintained with a rocuronium infusion to keep a train-of-four count of 1. Dexmedetomidine infusion continued at 0.2 µg/kg/h, or higher if necessary. Shortly after induction, the various supplementary analgesic drugs were given. A loading dose of ketamine 0.3 mg/kg LBW was given, followed by intermittent boluses roughly equivalent to 0.2 to 0.3 mg/kg/h of LBW. Intravenous parecoxib 40 mg and tramadol 100 mg were given. Dexamethasone 8 mg and tropisetron 5 mg were given intravenously for prophylaxis of postoperative nausea and vomiting (PONV).
 
For intravenous fluids, patients were given 10 mL/kg of actual body weight of either lactated Ringer’s solution or normal saline, with more given as appropriate. Hypotension was treated with either ephedrine or phenylephrine.
 
When the surgeon started to close the wounds, the rocuronium infusion was stopped and the dexmedetomidine infusion rate was reduced to 0.1 µg/kg/h. Wounds were infiltrated with 20 mL of 0.5% levobupivacaine. When all wounds were closed, the dexmedetomidine infusion was stopped, desflurane was switched off, and muscle relaxation was reversed with neostigmine and atropine. Patients were extubated when awake and able to obey commands.
 
After extubation, patients were transferred to the post-anaesthetic care unit (PACU) for observation for 30 minutes, or longer if appropriate. If a patient required rescue analgesia, intravenous pethidine 20 mg with intravenous ketamine 5 mg was given, and the dose was repeated if necessary. Once 10 mg of intravenous ketamine had been given, further rescue analgesia was intravenous pethidine 20 mg without any more ketamine, to avoid giving an awake patient enough ketamine to cause dizziness or hallucinations. When patients had good pain control and stable vital signs, they were transferred back to the ward. The standard postoperative protocol was then initiated: if patients requested analgesics, an intramuscular injection of pethidine 50 mg was given, and repeated after 4 hours if necessary. By early evening, when vital signs were stable, patients were allowed sips of water followed by a fluid diet of 60 mL/h. Regular oral paracetamol and etoricoxib were given, and oral pregabalin was added to the protocol the next day. Opioid requirements were reviewed for 24 hours after surgery.
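A minimal sketch of the PACU rescue-dosing rule described above may make the escalation explicit; this is an editorial illustration with simplified logic, not clinical software:

```python
# Editorial illustration of the PACU rescue rule above: each rescue is
# pethidine 20 mg IV, with ketamine 5 mg IV co-administered only until a
# cumulative 10 mg ketamine cap is reached. Simplified; not clinical software.
def next_rescue(ketamine_given_mg: float) -> dict:
    dose = {"pethidine_iv_mg": 20, "ketamine_iv_mg": 0}
    if ketamine_given_mg < 10:  # cap awake-patient ketamine at 10 mg in total
        dose["ketamine_iv_mg"] = min(5, 10 - ketamine_given_mg)
    return dose

total_ketamine = 0
for rescue in (1, 2, 3):        # e.g. three rescue requests in the PACU
    d = next_rescue(total_ketamine)
    total_ketamine += d["ketamine_iv_mg"]
    print(rescue, d)            # third rescue: pethidine 20 mg alone
```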
 
As part of the standard postoperative protocol, patients were asked to get off the bed and walk around the ward with the assistance of nursing or physiotherapy staff by the evening of the day of surgery. Provided there were no complications, patients were discharged on the second postoperative day. The anaesthesia protocol is summarised in Table 1.
 

Table 1. Anaesthesia protocol
 
Results
Patient characteristics are shown in Table 2, and postoperative opioid requirements are listed in Table 3.
 

Table 2. Patient characteristics (n=30)
 

Table 3. Postoperative opioid requirements (n=30)
 
Of the 30 patients, no opioid rescue analgesia was required in 14 (46.7%) throughout the postoperative period; six (20%) required intravenous pethidine for rescue analgesia in the PACU, but not after their return to the ward. The remaining 10 (33.3%) patients were given intramuscular pethidine injections in the ward on request.
 
The mean postoperative opioid requirement per patient in the whole case series was 32 mg of pethidine. Among the 16 patients who required rescue analgesia in the ward or in the PACU, their mean opioid requirement was 60 mg of pethidine, with a range of 20 to 150 mg.
 
This anaesthetic protocol included a dexmedetomidine infusion, which may cause hypotension and bradycardia through its alpha-2 adrenoceptor agonist (sympatholytic) action. In our case series, 11 (36.7%) patients developed transient hypotension despite intravenous fluid loading and required either intravenous ephedrine or phenylephrine. One patient had transient intra-operative bradycardia requiring atropine, probably due to preoperative use of a beta blocker and a low resting heart rate.
 
Discussion
Importance of reducing postoperative opioid use in obese patients
Opioids are among the world’s oldest known drugs. They have traditionally been used in anaesthesia as part of a balanced technique, to provide hypnosis and analgesia and to blunt the sympathetic response to surgery, and they remain the mainstay of postoperative analgesia in many situations. Morbidly obese patients, however, are particularly sensitive to the respiratory depressant effects of opioids. Taylor et al2 found that the use of opioids per se is a risk factor for respiratory events in the first 24 hours after surgery. Ahmad et al1 demonstrated, in their study of 40 morbidly obese patients undergoing laparoscopic bariatric surgery with desflurane and remifentanil-morphine–based anaesthesia, that hypoxaemic episodes in the first 24 hours were common: 14 of their 40 patients had more than five hypoxic episodes per hour despite supplementary oxygen.
 
Another concern with use of opioids in bariatric patients is the high incidence (>70%) of OSAS.7 In our study, 30% (n=9) of patients had OSAS confirmed by an overnight sleep study. The remaining patients were not tested although many had varying symptoms of OSAS. These untested patients were assumed to have OSAS unless proven otherwise. The American Society of Anesthesiologists recommends that in patients with OSAS, methods should be used to reduce or eliminate the requirement for systemic opioids.8 Hence, reducing perioperative opioid use by these obese patients can potentially reduce morbidity.
 
How can the anaesthetist avoid or reduce the use of perioperative opioids, and yet still provide balanced anaesthesia with hypnosis, analgesia, haemodynamic stability, and satisfactory postoperative analgesia? The first method is to combine general anaesthesia with regional analgesia techniques, such that anaesthetic agents will provide hypnosis while the regional blocks will provide analgesia and block sympathetic responses to surgery. Any form of major regional block in a morbidly obese patient can be technically challenging, however. Furthermore, with respect to bariatric surgery, most procedures are now performed laparoscopically, so that thoracic epidural analgesia techniques have become largely unnecessary.
 
Putting aside the use of regional analgesia, the second method to reduce perioperative opioid use is to use a combination of non-opioid agents with volatile agents or propofol to achieve analgesia and haemodynamic control.3 A point to note here is that as acute tolerance to the analgesic effects of opioids can rapidly develop (such as after 90 minutes of remifentanil infusion),9 any attempts to reduce postoperative opioid requirement must include an effort to either eliminate or reduce the use of intra-operative opioids. These techniques are now often described as opioid-free anaesthesia or non-opioid techniques.
 
Paracetamol, NSAIDs or COX-2 inhibitors, gabapentinoids, ketamine, and alpha-2 agonists, when used individually, have all been shown to reduce postoperative opioid requirements and improve pain relief.10 11 12 13 14 Different combinations of these agents, together with local anaesthetic infiltration of the wounds, have been reported for bariatric surgery, as discussed below.
 
Development of the study protocol based on previous studies
In 2003, Feld et al15 described a technique using sevoflurane combined with ketorolac, clonidine, ketamine, lignocaine, and magnesium for patients undergoing open gastric bypass. Compared with the control group, in which sevoflurane was used with fentanyl, the non-opioid group was less sedated and used less morphine in the PACU, although total morphine use at 16 hours was not significantly different from that of the opioid group.
 
In 2006, Feld et al16 described using desflurane combined with a dexmedetomidine infusion, compared with a control group given desflurane and fentanyl, for patients undergoing open gastric bypass. The dexmedetomidine group had lower pain scores and less morphine use in the PACU.
 
In 2005, Hofer et al17 reported a case of a super-obese patient weighing 433 kg who underwent open gastric bypass. No opioids were used; they were replaced by a high-dose dexmedetomidine infusion together with isoflurane.
 
As laparoscopic techniques have become more common in bariatric surgery, more studies of non-opioid anaesthetic techniques for laparoscopic bariatric surgery have been carried out. Tufanogullari et al18 described a technique in which either fentanyl or varying doses of dexmedetomidine were used with desflurane for laparoscopic bariatric surgery. All patients were also given celecoxib. Postoperatively, patients were given fentanyl boluses in the PACU, then intravenous morphine via a patient-controlled analgesia system. The only statistically significant difference was decreased PACU fentanyl use in the dexmedetomidine groups.
 
Ziemann-Gimmel et al19 studied 181 patients undergoing laparoscopic gastric bypass. In the treatment group, volatile anaesthetics were used together with intravenous paracetamol and ketorolac. Postoperatively, patients were given regular paracetamol and ketorolac. For breakthrough pain, intermittent oral oxycodone or intravenous hydromorphone was given. A small number of patients in this treatment group (3/89) remained opioid-free throughout, and 15 patients did not require opioid medication after their return to the ward.
 
In another study, in which the primary outcome was the incidence of PONV, Ziemann-Gimmel et al20 evaluated 119 patients undergoing laparoscopic bariatric surgery. The treatment group was managed with propofol infusion, dexmedetomidine infusion, paracetamol, ketorolac, and ketamine; the other group was managed with volatile anaesthetic and opioids. The postoperative analgesia regimen was the same as in their previous study.19 They reported a large reduction in PONV in the treatment group.
 
While most studies reported a decreased postoperative opioid requirement in their non-opioid groups, very few achieved zero postoperative opioid use. Only Ziemann-Gimmel et al19 achieved total opioid sparing, in a small proportion (3 out of 92 patients) of the treatment group, by using intra-operative and postoperative intravenous paracetamol and ketorolac.
 
Most of these earlier studies used only a few of the available non-opioid adjuncts, with dexmedetomidine a mainstay in most. We therefore proposed a wider mix of non-opioid adjuncts, combining paracetamol, a COX-2 inhibitor, pregabalin, ketamine, dexmedetomidine, and local anaesthetic infiltration. In contrast to these earlier studies, we were able to achieve zero postoperative opioid use in a significant percentage of patients (46.7%).
 
In our protocol, the only opioids given during anaesthesia were fentanyl 100 µg for intubation and tramadol 100 mg, a weak opioid, shortly after induction. All other opioid analgesics, if required, were given after the patient was awake. This avoided blindly giving long-acting opioids intra-operatively and allowed better titration, with small boluses given each time while the patient was awake.
 
Dexmedetomidine
Dexmedetomidine was a useful agent in our protocol; before its addition, total opioid sparing was very difficult to achieve. Dexmedetomidine is a highly selective alpha-2 adrenoceptor agonist with analgesic and sedative properties.21 A previous study of its use in bariatric anaesthesia failed to show any reduction in opioid requirements.18 In our protocol, we used more non-opioid adjuncts, and since we calculated the infusion dose using LBW instead of total body weight (TBW), we administered a much lower overall dose of dexmedetomidine.
 
Infusion of dexmedetomidine may cause initial hypertension and tachycardia (especially during a loading dose), followed by hypotension and bradycardia. In our study, no loading dose was given. Of the 30 patients, 11 (36.7%) developed transient hypotension despite intravenous fluid loading and required intravenous ephedrine or phenylephrine. This transient hypotension was aggravated by placing the patient in a steep reverse Trendelenburg position to facilitate surgical exposure, which decreases venous return. When using dexmedetomidine in bariatric surgery, care must be taken to ensure the patient is euvolaemic.
 
Ketamine
Ketamine was another useful adjunct in our protocol. Ketamine is an N-methyl-D-aspartate receptor antagonist with strong analgesic properties at subanaesthetic doses.22 It has advantages in morbidly obese patients because it causes little respiratory depression compared with opioids. In our protocol, we used LBW to calculate the ketamine dose, and used relatively low doses (0.3 mg/kg bolus followed by 0.2-0.3 mg/kg/h with intermittent boluses). This resulted in a low total ketamine dose, with a mean of 31 mg per patient (range, 25-50 mg). Midazolam 1 to 2 mg was also given at induction to prevent psychotomimetic reactions caused by ketamine. No patient developed hallucinations or dysphoria, and no delayed emergence was noted.
 
The use of lean body weight as dosing scalar
In our protocol, we chose to use LBW to calculate the doses of dexmedetomidine and ketamine. The classic teaching is that in obese patients, anaesthetic drugs should be dosed according to TBW, ideal body weight, or LBW depending on their lipid solubility: lipophilic drugs are better dosed according to actual body weight because of their increased volume of distribution, whereas hydrophilic drugs are better dosed according to LBW or ideal body weight.23 Lean body weight is significantly correlated with cardiac output, and drug clearance increases proportionately with LBW.6
 
There is insufficient information on the pharmacokinetics and pharmacodynamics of dexmedetomidine and ketamine in the morbidly obese patient. In the few previous studies of dexmedetomidine in bariatric anaesthesia, TBW was used as the dosing scalar. For example, Feld et al16 used a 0.5 µg/kg TBW loading dose followed by a 0.4 µg/kg/h infusion in their series of 10 patients undergoing open gastric bypass. Ziemann-Gimmel et al20 used a 0.5 µg/kg TBW loading dose followed by a 0.1 to 0.3 µg/kg/h infusion in their group of 60 patients undergoing a variety of bariatric procedures. Tufanogullari et al18 gave no loading dose and infused 0 to 0.8 µg/kg/h in their series of 80 patients undergoing laparoscopic banding or bypass. There were few data on ketamine dosing in bariatric surgery. We chose to dose these two drugs using LBW to see how our results would differ from the published studies.
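 
To illustrate the practical difference between the two dosing scalars, the following minimal sketch (not part of the study protocol) estimates LBW with the Hume equations5 and compares a ketamine bolus dosed by LBW against the same per-kilogram dose scaled by TBW; the patient values are hypothetical.

```python
# Illustrative only: dosing by lean body weight (LBW) vs total body weight (TBW).
# LBW is estimated with the Hume (1966) equations (reference 5); the 0.3 mg/kg
# ketamine bolus follows the protocol described in the text. Patient values
# below are hypothetical.

def hume_lbw(weight_kg: float, height_cm: float, female: bool) -> float:
    """Lean body weight (kg) estimated by the Hume formula."""
    if female:
        return 0.29569 * weight_kg + 0.41813 * height_cm - 43.2933
    return 0.32810 * weight_kg + 0.33929 * height_cm - 29.5336

tbw, height_cm, female = 120.0, 160.0, True   # hypothetical morbidly obese patient
lbw = hume_lbw(tbw, height_cm, female)        # ~59 kg

bolus_by_lbw = 0.3 * lbw   # ~18 mg: the dose given when scaling by LBW
bolus_by_tbw = 0.3 * tbw   # 36 mg: roughly double, if TBW were used instead

print(f"LBW = {lbw:.1f} kg; ketamine bolus {bolus_by_lbw:.0f} mg (LBW) "
      f"vs {bolus_by_tbw:.0f} mg (TBW)")
```

For this hypothetical 120-kg woman, dosing by LBW roughly halves the bolus, which is consistent with the low total ketamine doses reported above.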
 
Limitations of the study
Our study has several limitations. It was a prospective observational study with a relatively small number of cases. We do not have data comparing this protocol with our previous protocols, nor randomised controlled trial data to assess the isolated effect of any single drug.
 
The opioid that we used for rescue analgesia was pethidine, given intravenously in the recovery room by the anaesthetist, or intramuscularly on the ward by the nurses under a standing order. One could argue that the mean opioid dose per patient was inaccurate, since some patients received small intravenous boluses and others received fixed-dose intramuscular injections. In theory, to accurately assess postoperative parenteral opioid requirements, all patients should have been given a patient-controlled analgesia system to deliver boluses of parenteral opioid as required. This, however, is neither practical nor necessary for the patient, given that two thirds of our patients did not require any opioids at all; it would also entail considerable drug wastage whenever a whole cassette of drug remained unused.
 
We were able to demonstrate that a substantial proportion of patients did not require any opioids, but we do not have data to demonstrate a reduction in respiratory complications or an improvement in time to ambulation or discharge. This could be the basis for further studies.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Ahmad S, Nagle A, McCarthy RJ, et al. Postoperative hypoxaemia in morbidly obese patients with and without obstructive sleep apnea undergoing laparoscopic bariatric surgery. Anesth Analg 2008;107:138-43. Crossref
2. Taylor S, Kirton OC, Staff I, Kozol RA. Postoperative day one: a high risk period for respiratory events. Am J Surg 2005;190:752-6. Crossref
3. Mulier JP. Perioperative opioids aggravate obstructive breathing in sleep apnea syndrome: mechanisms and alternative anaesthesia strategies. Curr Opin Anaesthesiol 2016;29:129-33. Crossref
4. Alvarez A, Singh PM, Sinha AC. Postoperative analgesia in morbid obesity. Obes Surg 2014;24:652-9. Crossref
5. Hume R. Prediction of lean body mass from height and weight. J Clin Pathol 1966;19:389-91. Crossref
6. Ingrande J, Lemmens HJ. Dose adjustment of anaesthetics in the morbidly obese. Br J Anaesth 2010;105 Suppl 1:i16-23. Crossref
7. Lopez PP, Stefan B, Schulman CI, Byers PM. Prevalence of sleep apnea in morbidly obese patients who presented for weight loss surgery evaluation: more evidence for routine screening for obstructive sleep apnea before weight loss surgery. Am Surg 2008;74:834-8.
8. American Society of Anesthesiologists Task Force on Perioperative Management of patients with obstructive sleep apnea. Practice guidelines for the perioperative management of patients with obstructive sleep apnea: an updated report by the American Society of Anesthesiologists Task Force on Perioperative Management of patients with obstructive sleep apnea. Anesthesiology 2014;120:268-86. Crossref
9. Vinik HR, Kissin I. Rapid development of tolerance to analgesia during remifentanil infusion in humans. Anesth Analg 1998;86:1307-11. Crossref
10. Dahl JB, Nielsen RV, Wetterslev J, et al. Post-operative analgesic effects of paracetamol, NSAIDs, glucocorticoids, gabapentinoids and their combinations: a topical review. Acta Anaesthesiol Scand 2014;58:1165-81. Crossref
11. Blaudszun G, Lysakowski C, Elia N, Tramèr MR. Effect of perioperative systemic α2 agonists on postoperative morphine consumption and pain intensity: systematic review and meta-analysis of randomized controlled trials. Anesthesiology 2012;116:1312-22. Crossref
12. Cabrera Schulmeyer MC, de la Maza J, Ovalle C, Farias C, Vives I. Analgesic effects of a single preoperative dose of pregabalin after laparoscopic sleeve gastrectomy. Obes Surg 2010;20:1678-81. Crossref
13. Weinbroum AA. Non-opioid IV adjuvants in the perioperative period: pharmacological and clinical aspects of ketamine and gabapentinoids. Pharmacol Res 2012;65:411-29. Crossref
14. Alimian M, Imani F, Faiz SH, Pournajafian A, Navadegi SF, Safari S. Effect of oral pregabalin premedication on post-operative pain in laparoscopic gastric bypass surgery. Anesth Pain Med 2012;2:12-6. Crossref
15. Feld JM, Laurito CE, Beckerman M, Vincent J, Hoffman WE. Non-opioid analgesia improves pain relief and decreases sedation after gastric bypass surgery. Can J Anaesth 2003;50:336-41. Crossref
16. Feld JM, Hoffman WE, Stechert MM, Hoffman IW, Ananda RC. Fentanyl or dexmedetomidine combined with desflurane for bariatric surgery. J Clin Anesth 2006;18:24-8. Crossref
17. Hofer RE, Sprung J, Sarr MG, Wedel DJ. Anesthesia for a patient with morbid obesity using dexmedetomidine without narcotics. Can J Anaesth 2005;52:176-80. Crossref
18. Tufanogullari B, White PF, Peixoto MP, et al. Dexmedetomidine infusion during laparoscopic bariatric surgery: the effect on recovery outcome variables. Anesth Analg 2008;106:1741-8. Crossref
19. Ziemann-Gimmel P, Hensel P, Koppman J, Marema R. Multimodal analgesia reduces narcotic requirements and antiemetic rescue medication in laparoscopic Roux-en-Y gastric bypass surgery. Surg Obes Relat Dis 2013;9:975-80. Crossref
20. Ziemann-Gimmel P, Goldfarb AA, Koppman J, Marema RT. Opioid-free total intravenous anaesthesia reduces postoperative nausea and vomiting in bariatric surgery beyond triple prophylaxis. Br J Anaesth 2014;112:906-11. Crossref
21. Carollo DS, Nossaman BD, Ramadhyani U. Dexmedetomidine: a review of clinical applications. Curr Opin Anaesthesiol 2008;21:457-61. Crossref
22. Gammon D, Bankhead B. Perioperative pain adjuncts. In: Johnson KB, editor. Clinical pharmacology for anesthesiology. McGraw-Hill Education; 2014: 157-78.
23. Sinha AC, Eckmann DM. Anesthesia for bariatric surgery. In: Miller RD, Eriksson LI, Fleisher LA, Wiener-Kronish JP, Young WL, editors. Miller’s anesthesia. 7th ed. Philadelphia: Churchill Livingstone; 2015: 2089-104.

Seatbelt use by pregnant women: a survey of knowledge and practice in Hong Kong

Hong Kong Med J 2016 Oct;22(5):420–7 | Epub 19 Aug 2016
DOI: 10.12809/hkmj164853
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Seatbelt use by pregnant women: a survey of knowledge and practice in Hong Kong
WC Lam, MPH (CUHK), FHKAM (Obstetrics and Gynaecology)1; William WK To, MD, FHKAM (Obstetrics and Gynaecology)1; Edmond SK Ma, MD, FHKAM (Community Medicine)2
1 Department of Obstetrics and Gynaecology, United Christian Hospital, Kwun Tong, Hong Kong
2 The Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, Hong Kong
 
Corresponding author: Dr WC Lam (lamwc2@ha.org.hk)
 
 Full paper in PDF
 
Abstract
Introduction: The use of motor vehicles is common during pregnancy. Correct seatbelt use during pregnancy has been shown to protect both the pregnant woman and her fetus. This survey aimed to evaluate the practices, beliefs, and knowledge of Hong Kong pregnant women about correct seatbelt use, and to identify factors leading to reduced compliance and inadequate knowledge.
 
Methods: A self-administered survey was completed by postpartum women in the postnatal ward at the United Christian Hospital, Hong Kong, from January to April 2015. Eligible surveys were available from 495 women. The primary outcome was the proportion of pregnant women who maintained or reduced seatbelt use during pregnancy. Secondary outcomes included knowledge of correct seatbelt use, as well as factors contributing to non-compliance and inadequate knowledge.
 
Results: Compliance with seatbelt use decreased during pregnancy, and the decrease became more pronounced with increasing gestation. Pregnant women’s knowledge about seatbelt use was inadequate and only a minority had received relevant information. Women who held a driving licence and those with a higher education level were more likely to wear a seatbelt before and during pregnancy. Women with tertiary education or above knew more about seatbelt use.
 
Conclusions: Public health education for pregnant women in Hong Kong about road safety is advisable, and targeting groups with lower compliance may be more effective and successful.
 
 
New knowledge added by this study
  • There was decreased compliance with seatbelt use by pregnant women in Hong Kong. The decrease in compliance became more pronounced as gestation increased. This may be related to lack of relevant information and misconceptions.
Implications for clinical practice or policy
  • As a form of public health and road traffic safety promotion, information about seatbelt use during pregnancy should be provided to pregnant women, health care workers, and all road traffic users.
 
 
Introduction
Road traffic safety is an important public health issue. Health care professionals are usually involved in the treatment of road traffic accident victims rather than in preventing their occurrence or minimising the severity of injury. Education about and promotion of road traffic safety are important for all; pregnant women are no exception. Safety issues relate to both the mother and her fetus, and different information and/or a different approach may be required. With any kind of intervention during pregnancy, an emphasis on the safety of the fetus may improve compliance.
 
The number of pregnant drivers in Hong Kong is unknown, but the use of motor vehicles including private car, taxi, and public light bus is common during pregnancy. To promote maternal seatbelt use among the local pregnant population, information about their beliefs is essential.
 
Correct seatbelt use during pregnancy has been shown to protect both the pregnant woman and her fetus. There is evidence that pregnant women who do not wear a seatbelt and who are involved in a motor vehicle accident are more likely to experience excessive bleeding and fetal death.1 2 3 Compliance and proper use of the seatbelt are crucial. Incorrect placement of the seatbelt and a subsequent accident may result in fetal death due to abruptio placentae.4 The three-point restraint (ie shoulder harness in addition to a lap belt) provides more protection for the fetus than a lap belt alone. Previous studies have revealed incorrect positioning of the seatbelt in 40% to 50% of pregnant women.5 6 Various other studies have shown reduced seatbelt compliance during pregnancy.7 The proportion of seatbelt use has been reported to be around 70% to 80% before pregnancy, but reduced by half at 20 weeks or more of gestation.5 7 There is also evidence that pregnant women lack information about the proper use of a seatbelt and its role in preventing injury: only 14% to 37% of pregnant women received advice from health care professionals.5 6 7 8 The common reasons for not using a seatbelt have been reported to include discomfort, inconvenience, forgetfulness, and fear of harming the fetus.9
 
In this study, the current practice and knowledge of Hong Kong pregnant women about seatbelt use was surveyed, and any determining factors were identified. The results will enable public health education and promotion to be targeted to at-risk groups to improve road traffic safety among local pregnant women.
 
Methods
Study design
This was a cross-sectional survey using convenience sampling, carried out from January to April 2015. A self-administered questionnaire was distributed to postpartum women in the postnatal ward of United Christian Hospital (UCH) in Hong Kong. Participation in the survey was entirely voluntary.
 
Questionnaires were analysed if at least 50% of questions were answered, including the main outcomes. Those from women who did not understand the content or who did not understand Chinese or English were excluded.
 
Questionnaire
The questionnaire was based on a pilot study, with questions revised after review. It was available in English and Chinese (traditional and simplified) versions and was divided into four parts. The first part included demographic and pregnancy information and driving experience. The second part focused on practice of seatbelt use before and during pregnancy, any change in habit with progression of pregnancy, and the reason(s) for non-use of a seatbelt. The third part related to awareness and knowledge of the Road Traffic Ordinance on seatbelt use and the correct use of both lap and shoulder belts. Text descriptions and diagrams of different restraint positions were provided; the correct way is to place the lap belt below the abdomen and the shoulder belt diagonally across the chest. The diagram of restraint positions was adapted, with permission, from the leaflet “Protect your unborn child in a car” by the Transport Department of Hong Kong.10 The final part asked whether the postpartum woman had received any advice about seatbelt use during pregnancy, the source of information, and whether they thought such information was useful and/or relevant.
 
Statistical analysis
Sample size calculation
Using the results of overseas studies as reference, the sample size was calculated on the assumption that around 80% of Hong Kong pregnant women use a seatbelt. A previous questionnaire survey among postpartum women at a local hospital indicated that a response rate of approximately 80% could be expected.11 We accepted a margin of error of 4% with a confidence level of 95%. Using the formula n = z² × p × (1 − p) / d² (where p = proportion wearing a seatbelt [0.8]; d = margin of error [0.04]; and z = 1.96), a minimum of 385 evaluable respondents was required; after adjusting for the anticipated 80% response rate, the target sample size was 481.
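 
As an arithmetic check (not part of the original analysis), the calculation can be reproduced in a few lines of Python:

```python
import math

# Sample size for estimating a proportion: n = z^2 * p * (1 - p) / d^2
z, p, d = 1.96, 0.8, 0.04
n_raw = z**2 * p * (1 - p) / d**2     # 384.16 -> at least 385 evaluable surveys
n_target = math.ceil(n_raw / 0.8)     # inflate for the expected 80% response rate

print(round(n_raw, 2), n_target)      # 384.16 481
```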
 
All statistical analyses were performed using PASW Statistics 18 (Release Version 18.0.0; SPSS Inc, Chicago [IL], US). For categorical data, the Chi squared test was used to compare knowledge about seatbelt use between wearers and non-wearers. For continuous data with a highly skewed distribution, non-parametric tests (Mann-Whitney U test for two groups and Kruskal-Wallis H test for more than two groups) were used to compare knowledge of correct seatbelt use. The knowledge score was calculated from the answers to questions about the Road Traffic Ordinance on seatbelt use and the proper way to use both the lap and shoulder belts, with one point given for each correct answer. The critical level of statistical significance was set at 0.05.
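 
For readers wishing to reproduce this style of analysis, the sketch below applies the same three tests in Python; the study itself used PASW Statistics, and all counts and scores here are invented purely for illustration.

```python
from scipy.stats import chi2_contingency, mannwhitneyu, kruskal

# Chi squared test on categorical answers (hypothetical counts):
# rows = seatbelt users / never-users, columns = correct / incorrect answers
table = [[320, 85],
         [45, 45]]
chi2, p_chi, dof, expected = chi2_contingency(table)

# Mann-Whitney U test comparing knowledge scores (0-4) between two groups
users = [4, 3, 3, 2, 4, 3, 1, 4]
never_users = [2, 1, 3, 0, 2, 1]
u_stat, p_mw = mannwhitneyu(users, never_users)

# Kruskal-Wallis H test across more than two groups (three education levels)
primary, secondary, tertiary = [1, 0, 2], [2, 3, 1, 2], [4, 3, 4, 3]
h_stat, p_kw = kruskal(primary, secondary, tertiary)

print(p_chi, p_mw, p_kw)   # significant if below the 0.05 threshold
```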
 
The relative effects of factors (age, marital status, education level, resident status, husband’s occupation, family monthly income, respondent’s and husband’s driving licence holder status, frequency of public transport use, and stage of pregnancy) that might influence seatbelt use during pregnancy were estimated using a generalised estimating equation (GEE). The outcome variables were correlated dichotomous responses (eg use of a seatbelt at different gestations) that a standard regression model would treat as independent; the GEE corrected the statistical inference for this lack of independence.
 
To account for the interdependence of observations, we used robust (GEE) estimates of variance, treating each respondent’s repeated observations across the periods as a cluster. For use of a seatbelt before pregnancy and during each trimester, since the responses were correlated over time, a GEE model with a working correlation matrix was adopted.12
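 
A hedged sketch of such a model, using the statsmodels GEE implementation, is shown below; the variable names and simulated data are hypothetical, and the real analysis was performed in PASW Statistics.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate long-format data: one row per woman per period
# (period 0 = before pregnancy, 1-3 = trimesters). Hypothetical throughout.
rng = np.random.default_rng(0)
n = 200
women = pd.DataFrame({
    "id": np.arange(n),
    "licence": rng.integers(0, 2, n),     # holds a driving licence
    "tertiary": rng.integers(0, 2, n),    # tertiary education or above
})
df = women.loc[women.index.repeat(4)].reset_index(drop=True)
df["period"] = np.tile(np.arange(4), n)
logit = 0.5 + 0.8 * df["licence"] + 0.6 * df["tertiary"] - 0.2 * df["period"]
df["belt"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE logistic regression: repeated binary responses clustered by respondent,
# with a working correlation matrix for the within-woman correlation
model = smf.gee("belt ~ period + licence + tertiary",
                groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params))   # odds ratios, analogous to Table 4
```

Here the exchangeable working correlation assumes a constant within-woman correlation across periods, and statsmodels reports robust (sandwich) standard errors by default, mirroring the robust variance estimates described above.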
 
Results
Demographic data
There were 769 postpartum women in the postnatal ward during the study period. A total of 550 questionnaires were distributed by convenience sampling; 501 were returned, a response rate of 91%. The remaining women (n=49, 9%) either refused to participate or did not return the questionnaire. Among the returned questionnaires, six were excluded because information on the main outcomes was missing or because they were <50% complete. At the end of the recruitment period, 495 (90%) questionnaires were valid for analysis.
 
The majority (93.5%) of respondents were aged between 21 and 40 years. Only 10 (2%) were English speakers; the others (98%) spoke Cantonese or Mandarin as their first language and completed the Chinese questionnaire. With regard to education level, 188 (38%) women had received tertiary education or above, 290 (58.6%) secondary education, and 14 (2.8%) primary education. There was no existing information about any association between pregnant woman or spousal occupation and compliance with or knowledge about seatbelt use. We therefore investigated whether occupation, for example, driver or health care worker, was a relevant factor. Less than half (n=216, 43.6%) of the women were housewives, 57 (11.6%) were professionals, and 14 (2.8%) were medical health care workers. Among spouses, 32 (6.5%) were drivers, two (0.4%) were medical health care workers, and 122 (24.6%) were professionals. Other occupations were unrelated to transportation or health care, including clerk, construction site worker, restaurant waiter, and chef. Overall, 439 (88.7%) women were Hong Kong residents; the others were new immigrants or double-entry permit holders from Mainland China. Of the respondents, 477 (96.4%) women had attended regular antenatal check-ups, and 215 (43.4%) were first-time mothers.
 
Driving experience and mode of transport
Around half of the spouses (49.1%), but only 71 (14.3%) of the women, held a Hong Kong driving licence. Among the women with a driving licence, only 16 (22.5%) drove daily and seven (9.9%) drove only at weekends. Public transport was used daily by 300 (60.6%) women. Among the different means of public transport, buses (53.7%) were the most commonly used, although not all bus seats have seatbelts. In public light buses and taxis, use of a seatbelt, if available, is mandatory; 38.6% and 15.2% of respondents used public light buses and taxis, respectively.
 
Use of a seatbelt before and during pregnancy
Of the respondents, 379 (76.6%) reported using a seatbelt in the 6 months before pregnancy, but compliance decreased as pregnancy progressed: to 73.5% in the first trimester, 70.5% in the second trimester, and 67.1% in the third trimester (Table 1). There were 26 women who changed from not wearing a seatbelt before pregnancy to wearing one after they became pregnant, giving a total of 405 ever seatbelt users. When the knowledge score analysis was repeated excluding these 26 women, the findings and statistical significance were similar.
 

Table 1. Use of seatbelt before and during pregnancy
 
Reasons for not using a seatbelt during pregnancy
With regard to the reasons for not using a seatbelt at any time during pregnancy, 156 (89.1%) of 175 women stated that the seatbelt caused discomfort, 22 (12.6%) thought seatbelts were not useful, and 79 (45.1%) worried that they would harm the fetus (Table 2). Apart from the three options stated in the questionnaire, several respondents noted that journeys on public light buses were usually short and that the time taken to buckle and unfasten the seatbelt might delay other passengers. Other women admitted to being lazy or forgetful, or were simply not in the habit of using a seatbelt. They also found seatbelts inconvenient because those on public transport were “not user-friendly”, “too short”, or “dirty” (Table 2).
 

Table 2. Reasons for not using a seatbelt
 
Knowledge of seatbelt use during pregnancy
Of the respondents, 216 (43.6%) correctly answered that pregnant women are not exempted from seatbelt use under the Road Traffic Ordinance; the remaining 56.4% either answered wrongly or did not know. Only 52.7% of women correctly indicated that appropriate use of a seatbelt will not harm the fetus. Although around half of the women wrongly believed that pregnant women are exempt from seatbelt legislation or that seatbelt use will harm the fetus, 358 (72.3%) stated that pregnant women should wear a seatbelt. When three-point seatbelts were shown in the diagrams, 403 (81.4%) women could identify the correct way of wearing the seatbelt, with the lap strap placed below the bump rather than over it (Table 3).
 

Table 3. Knowledge of seatbelt use (seatbelt users vs non-users)
 
Among all the respondents, 90 (18.2%) women never wore a seatbelt, and the other 405 (81.8%) were seatbelt users either before or during pregnancy. Comparison of responses revealed that never wearers of a seatbelt had significantly poorer knowledge in three of the four questions about seatbelt use during pregnancy (P<0.05) [Table 3].
 
Information about seatbelt use during pregnancy
Information about seatbelt use had been received by only 32 (6.5%) women. Of these, 13 (40.6%) had obtained the information from the internet; others cited staff of government and private clinics, magazines, and publications of the Transport Department. Seven (21.9%) received information from friends or family members; one had a car accident during pregnancy and was given relevant information by health care workers at the Accident and Emergency Department. Most (n=426, 86%) women expressed the view that information about seatbelt use during pregnancy was useful and necessary.
 
Factors influencing use of seatbelt during pregnancy
Among all potential factors, women who held a driving licence (odds ratio [OR]=3.28; P=0.004) or had a higher level of education (OR=2.13; P<0.001) were more likely to use a seatbelt. Considering time as another variable, as pregnancy progressed women were significantly less likely to use a seatbelt (OR=0.84; P<0.001) [Table 4].
 

Table 4. Determining factors influencing use of a seatbelt before and during pregnancy
 
Factors influencing knowledge about correct seatbelt use
Women with a lower education level (P<0.001) were less aware of the Road Traffic Ordinance on seatbelt use, the protective effects of a seatbelt during pregnancy, and the correct way to position both the lap and shoulder belts (Table 5).
 

Table 5. Determining factors influencing knowledge score for correct seatbelt use
 
Discussion
Main findings
In this study, 76.6% of Hong Kong pregnant women were consistent seatbelt wearers before pregnancy, similar to the 70% to 80% reported in overseas studies.5 7 Compliance was reduced in all trimesters and decreased as gestation progressed. Only 26 women changed their behaviour from non-users to users after becoming pregnant. The survey also demonstrated misconceptions about the effects of seatbelt use on pregnancy and the fetus. Pregnant women’s knowledge about seatbelt use was inadequate and only a minority had received relevant information. Women who held a driving licence or had a higher education level were more likely to wear a seatbelt before and during pregnancy, and women with tertiary education or above were more knowledgeable about seatbelt use.
 
Strengths and limitations
As far as we know, this is the first survey in Hong Kong of pregnant women’s knowledge about seatbelt use and their associated practice, and it achieved a reasonably high response rate. One limitation of the study was that the questionnaire was not validated and there were overlapping categories for numerical variables. The results and experience of this study can serve to revise the questions for a future study with improved validity and reliability. During the study period, 769 postpartum women stayed in the postnatal ward and 495 (64%) completed questionnaires were collected. Although this proportion was relatively high, the convenience sampling may still have affected the representativeness of the sample. Moreover, this was a single-centre survey in the obstetric unit of a district hospital. The UCH provides obstetric services to the population of the Kowloon East region, and the geographical location of a clinic could influence the mode of transport used to attend antenatal hospital appointments. Although taxis and public light buses are the usual modes of transport, some women may have taken the bus or Mass Transit Railway, which do not require use of a seatbelt. Furthermore, deliveries at UCH account for less than 10% of all deliveries in Hong Kong, so the results may not be applicable to other clusters with patients of different education levels, driving experience, and transportation habits.
 
In addition, women who were unable to read or understand Chinese or English were excluded. These were usually illiterate women or non–Hong Kong residents, who may be the group with the lowest compliance and poorest knowledge about seatbelt use. There were 49 women who refused to participate and six who did not complete the questionnaire; this 10% also introduced inaccuracy and bias into our data. Reporting bias is another concern: discrepancies between observed and self-reported seatbelt use were found in a previous study.13 The anonymity of the questionnaires might have minimised reporting bias. Although all demographic variables included in the questionnaire were analysed, other potential confounders might have affected the knowledge score and the use of a seatbelt during pregnancy, for example, prior traffic accidents involving the respondents or their family members, and risk-taking behaviours such as smoking, alcohol drinking, and drug use; these were not investigated and hence not adequately adjusted for in the knowledge score analysis or in the GEE model. Finally, multivariate rather than univariate analysis of the factors affecting knowledge score could have been performed to investigate the relationships among the different variables.
 
Interpretation
Prevention plays a major role in ensuring maternal and fetal survival in road traffic accidents. Motor vehicle crashes are responsible for severe maternal injury and fetal loss. Despite existing knowledge about the protective effects of wearing a seatbelt, pregnant women remain poorly compliant. This was confirmed in this local survey and in overseas studies.14 15
 
In the Report on Confidential Enquiries into Maternal Deaths in the United Kingdom 1994-1996 published by the Royal College of Obstetricians and Gynaecologists, 13 pregnant women died as a result of road traffic accidents. One of the victims did not use a seatbelt and was forcibly ejected from the vehicle.16 Ten years later, in a more recent Report on Confidential Enquiries into Maternal Deaths in the United Kingdom 2006-2008,17 there were 17 pregnant women who died as a result of road traffic accidents. A specific recommendation was made in the report: “All women should be advised to wear a 3-point seat belt throughout pregnancy, with the lap strap placed as low as possible beneath the ‘bump’ lying across the thighs and the diagonal shoulder strap above the ‘bump’ lying between the breasts. The seat belt should be adjusted to fit as snugly and comfortably as possible, and if necessary the seat should be adjusted”.17
 
According to the Road Traffic Ordinance in Hong Kong, drivers and passengers must wear seatbelts where provided. The exceptions are when reversing a vehicle, making a three-point turn, manoeuvring in and out of a parking place, and those who have a medical certificate and have been granted an exemption on medical grounds by the Commissioner for Transport.18 According to a report of the Transport Department of Hong Kong, the total number of road traffic accidents was 14 436 in 2003. In 2013, the number rose to 16 089. The number of pregnant women involved or injured in road traffic accidents is unknown.19 The Hong Kong SAR Government revises seatbelt legislation regularly to enhance road safety. Since 1 January 2001, passengers have been required to wear a seatbelt, if available, in the rear of taxis as well as in the front. Since 1 August 2004, passengers on public light buses have also been required to wear a seatbelt where one is fitted.20 21 Stickers were put inside buses and taxis to remind passengers of their responsibility to wear a seatbelt and to give clear instructions on the correct way to wear it. Nonetheless, the requirement to use a seatbelt and its protective effects were not well recognised among the respondents in this survey. This may be due to the lack of information provision as only 6.5% of women had received information related to seatbelt use in pregnancy.
 
In this study, women with a lower education level had poorer knowledge about seatbelt use in pregnancy, and effective public education should target them. Diagrams provide simple and direct instruction that women with a lower education level, or those who use public transport only occasionally, can easily understand and follow. In the past, leaflets or stickers about seatbelt use were widely seen, especially after introduction of the new legislation, but material specifically targeted at the pregnant population was not common. Maternal child health centres and the antenatal clinics of government hospitals are ideal places to distribute educational material. Television announcements may also convey the message effectively, not only to pregnant women but to all road users, and offer a good opportunity to inform drivers and other passengers so that they can help pregnant women, as well as the elderly and disabled, who use public transport. Regular spot-checks on public transport and law enforcement may also encourage compliance with seatbelt use. The majority of doctors and midwives give advice about seatbelt use only if asked, and this survey demonstrated that the proportion of pregnant women who received seatbelt information was very small. Written instructions and advice should be available from well-informed health care professionals, and pregnant women should always be encouraged to wear a correctly positioned seatbelt. Obstetricians, midwives, and general practitioners play an important role in disseminating this information. A study in Ireland showed that 75% of general practitioners believed women should wear seatbelts in the third trimester, although only 30% provided regular advice and fewer than 50% indicated that they were aware of the correct advice to give.22
 
Conclusions
This study demonstrated decreased compliance with seatbelt use during pregnancy, and compliance continued to decrease as pregnancy progressed. Women with a lower education level or without a driving licence were less likely to use a seatbelt during pregnancy; the former were also less aware of the Road Traffic Ordinance on seatbelt use and the correct way to position both the lap and shoulder belts. Only a minority of pregnant women had received information about seatbelt use. Future studies assessing the knowledge of Hong Kong health care workers about seatbelt use in pregnancy may enhance the awareness and involvement of medical professionals in educating pregnant women on this issue. Publicity and education about road safety by health care providers and the government are advised, and targeting groups with lower compliance may be more effective and successful.
 
Acknowledgements
The authors gratefully acknowledge Mr Edward Choi for his valuable statistical advice, the staff in the postnatal ward of UCH for helping to collect the questionnaires, and the Transport Department of Hong Kong for permission to use the diagram of restraint positions adapted from the leaflet “Protect your unborn child in a car” in the questionnaires.
 
Declaration
All authors have disclosed no conflicts of interest.
 
References
1. Hyde LK, Cook LJ, Olson LM, Weiss HB, Dean JM. Effect of motor vehicle crashes on adverse fetal outcomes. Obstet Gynecol 2003;102:279-86. Crossref
2. Wolf ME, Alexander BH, Rivara FP, Hickok DE, Maier RV, Starzyk PM. A retrospective cohort study of seatbelt use and pregnancy outcome after a motor vehicle crash. J Trauma 1993;34:116-9. Crossref
3. Klinich KD, Schneider LW, Moore JL, Pearlman MD. Injuries to pregnant occupants in automotive crashes. Annu Proc Assoc Adv Automot Med 1998;42:57-91.
4. Bunai Y, Nagai A, Nakamura I, Ohya I. Fetal death from abruptio placentae associated with incorrect use of a seatbelt. Am J Forensic Med Pathol 2000;21:207-9. Crossref
5. Jamjute P, Eedarapalli P, Jain S. Awareness of correct use of a seatbelt among pregnant women and health professionals: a multicentric survey. J Obstet Gynaecol 2005;25:550-3. Crossref
6. Johnson HC, Pring DW. Car seatbelts in pregnancy: the practice and knowledge of pregnant women remain causes for concern. BJOG 2000;107:644-7. Crossref
7. Ichikawa M, Nakahara S, Okubo T, Wakai S. Car seatbelt use during pregnancy in Japan: determinants and policy implications. Inj Prev 2003;9:169-72. Crossref
8. Taylor AJ, McGwin G Jr, Sharp CE, et al. Seatbelt use during pregnancy: a comparison of women in two prenatal care settings. Matern Child Health J 2005;9:173-9. Crossref
9. Weiss H, Sirin H, Levine JA, Sauber E. International survey of seat belt use exemptions. Inj Prev 2006;12:258-61. Crossref
10. Transport Department, The Government of the Hong Kong Special Administrative Region. Protect your unborn child in a car. Available from: http://www.td.gov.hk/filemanager/en/content_174/belt-e.pdf. Accessed Aug 2016.
11. Yu CH, Chan LW, Lam WC, To WK. Pregnant women’s knowledge and consumption of long-chain omega-3 polyunsaturated fatty acid supplements. Hong Kong J Gynaecol Obstet Midwifery 2014;14:57-63.
12. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika 1986;73:13-22. Crossref
13. Robertson LS. The validity of self-reported behavioral risk factors: seatbelt and alcohol use. J Trauma 1992;32:58-9. Crossref
14. Luley T, Fitzpatrick CB, Grotegut CA, Hocker MB, Myers ER, Brown HL. Perinatal implications of motor vehicle accident trauma during pregnancy: identifying populations at risk. Am J Obstet Gynecol 2013;208:466.e1-5. Crossref
15. Grossman NB. Blunt trauma in pregnancy. Am Fam Physician 2004;70:1303-10.
16. Chapter 13: Fortuitous deaths. Why mothers die: report on confidential enquiries into maternal deaths in the United Kingdom 1994-1996. London: Royal College of Obstetricians and Gynaecologists Press; 2001.
17. Cantwell R, Clutton-Brock T, Cooper G, et al. Saving Mothers’ Lives: Reviewing maternal deaths to make motherhood safer: 2006-2008. The Eighth Report of the Confidential Enquiries into Maternal Deaths in the United Kingdom. BJOG 2011;118 Suppl 1:1-203. Crossref
18. Transport Department, The Government of the Hong Kong Special Administrative Region. Be Smart, buckle up. Available from: http://www.td.gov.hk/filemanager/en/content_174/seatbelt_leaflet.pdf. Accessed Aug 2016.
19. Transport Department, The Government of the Hong Kong Special Administrative Region. Road Traffic Accident Statistics Year 2013. Available from: http://www.td.gov.hk/en/road_safety/road_traffic_accident_statistics/2013/index.html. Accessed Aug 2016.
20. Transport Department, The Government of the Hong Kong Special Administrative Region. Seat belt: safe motoring guides. Available from: http://www.td.gov.hk/en/road_safety/safe_motoring_guides/seat_belt/index.html. Accessed Aug 2016.
21. Transport Department, The Government of the Hong Kong Special Administrative Region. Road Safety Bulletin; March 2001. Available from: http://www.td.gov.hk/filemanager/en/content_182/rs_bulletin_04.pdf. Accessed Aug 2016.
22. Wallace C. General practitioners knowledge of and attitudes to the use of seat belts in pregnancy. Ir Med J 1997;90:63-4.
