Anterior cruciate ligament tear in Hong Kong Chinese patients

Hong Kong Med J 2015 Apr;21(2):131–5 | Epub 19 Dec 2014
DOI: 10.12809/hkmj134124
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Anterior cruciate ligament tear in Hong Kong Chinese patients
August WM Fok, FHKCOS, FHKAM (Orthopaedic Surgery); WP Yau, FHKCOS, FHKAM (Orthopaedic Surgery)
Division of Sports and Arthroscopic Surgery, Department of Orthopaedics and Traumatology, Queen Mary Hospital, The University of Hong Kong, Hong Kong
 
Corresponding author: Dr August WM Fok (augustfok@hotmail.com)
Abstract
Objective: To investigate the associations between patient sex, age, cause of injury, and frequency of meniscus and articular cartilage lesions seen at the time of the anterior cruciate ligament reconstruction.
 
Design: Case series.
 
Setting: University affiliated hospital, Hong Kong.
 
Patients: Medical notes and operating records of 672 Chinese patients who had received anterior cruciate ligament reconstruction between January 1997 and December 2010 were reviewed. Data concerning all knee cartilage and meniscus injuries documented at the time of surgery were analysed.
 
Results: Of the 593 patients, meniscus injuries were identified in 315 (53.1%). Patients older than 30 years were more likely to suffer from meniscal injury compared with those younger than 30 years (60% vs 51%, P=0.043). Longer surgical delay was observed in patients with meniscal lesions compared with those without (median, 12.3 months vs 9.1 months, P=0.021). Overall, 139 cartilage lesions were identified in 109 (18.4%) patients. Patients with cartilage lesions were significantly older than those without the lesions (mean, 27.6 years vs 25.1 years, P=0.034). Male patients were more likely to have chondral injuries than female patients (20.1% vs 10.9%, P=0.028). The risk of cartilage lesions was increased by nearly 3 times in the presence of meniscal tear (P<0.0001; odds ratio=2.7; 95% confidence interval, 1.7-4.2).
 
Conclusions: Increased age and surgical delay increased the risk of meniscal tears in patients with anterior cruciate ligament tear. Increased age, male sex, and presence of meniscal tear were associated with an increased frequency of articular lesions after an anterior cruciate ligament tear.
 
New knowledge added by this study
  •  This study served to identify the risk factors for meniscal and cartilage injuries in patients with anterior cruciate ligament (ACL) tear.
Implications for clinical practice or policy
  •  Patients with ACL deficiency should be informed about the increased risk of meniscus injuries associated with surgical delay.
 
 
Introduction
Anterior cruciate ligament (ACL) tear is one of the most common sports injuries seen in clinical practice, and such injury is often associated with meniscal and chondral lesions. It is widely believed that early surgery can prevent such lesions in ACL-deficient patients, and probably help avoid the most dreaded complication of early osteoarthritis of the knee.1 Despite multiple studies conducted to evaluate the relationship between intra-articular injuries and ACL tear, such associations among Asians, especially Chinese, have not been extensively studied. Data show that females are more susceptible to ACL injury than their male counterparts,2 3 4 but a lower risk of other intra-articular injuries in females was observed in some studies.5 Furthermore, one study showed that the incidence of meniscus tear was associated with the mechanism of ACL injury6; however, other studies were not able to show a significant relationship between the type of sports causing injury and the incidence of meniscal and chondral lesions.7 The objective of this study was two-fold. Our first aim was to report the meniscal and chondral lesions that accompany ACL tears in a large Chinese population. Our second aim was to test for relationships between these lesions and patient sex, age, surgical delay, and cause of ACL injury.
 
Methods
A database that recorded all patients who had received ACL reconstruction in our hospital since 1997 was reviewed. Overall, 672 Chinese patients who had received the surgery between January 1997 and December 2010 were identified. Their medical notes and operating records were reviewed. Data concerning the patient sex, age, causes of injury, elapsed time from injury to surgery, and all knee cartilage and meniscus injuries documented at the time of surgery were analysed.
 
Exclusion criteria were: patients who had radiological evidence of osteoarthritis (Kellgren-Lawrence grade 3 or 4); a concomitant grade III medial collateral ligament, lateral collateral ligament, or posterior cruciate ligament deficiency (evaluated and recorded by means of examination with the patient under anaesthesia at the time of surgery); any revision procedure involving the ACL; or knee dislocation.
 
The time of the initial ACL injury was determined from the patient’s history. This included a definite incident of a single twisting injury, with the knee giving way with a ‘pop’ sound, gross knee swelling, and inability to resume the sport or to walk. The nature of this injury was further verified against the hospital medical notes, or the records of the primary attending physician, when available. Patients were considered potential candidates for ACL reconstruction if any two of the following criteria were satisfied: (1) instability during pivoting movements; (2) signs of ACL deficiency, including a positive Lachman test, anterior drawer test, or pivot shift test; and (3) evidence of an ACL tear on magnetic resonance imaging (MRI).
 
The presence of cartilage injuries and meniscal lesions was confirmed in the operating room by means of knee arthroscopy. Several independent variables were studied: patient sex, age at the time of surgery, surgical delay (defined as the duration in months between the index ACL injury and reconstruction), and causes of ACL injury.
 
Statistical analyses
Data analysis was performed using the Statistical Package for the Social Sciences (Windows version 15.0; SPSS Inc, Chicago [IL], US). Student’s t test was used to compare mean ages. The Mann-Whitney U test was used to compare the length of surgical delay. Fisher’s exact test was used to evaluate categorical variables. Binary logistic regression was used to estimate the independent effects of individual factors. A P value of <0.05 was considered statistically significant.
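The categorical analyses described above can be illustrated with a short script. The 2×2 counts below are hypothetical (the paper reports only the resulting odds ratios, not the underlying cell counts), and the Woolf log method is one standard way to obtain an approximate confidence interval for an odds ratio; this is a sketch, not the study’s actual computation.

```python
import math

# Hypothetical 2x2 table (illustrative counts, NOT the study's data):
#                    cartilage lesion   no lesion
# meniscal tear           a = 80          b = 235
# no meniscal tear        c = 29          d = 249
a, b, c, d = 80, 235, 29, 249

# Odds ratio from the cross-product of the 2x2 table
odds_ratio = (a * d) / (b * c)

# Woolf (log) method: standard error of log(OR), then exponentiate
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

In practice a statistics package (eg SPSS, as used in the study) would also supply the Fisher’s exact P value for the same table; the script above only reproduces the odds-ratio arithmetic.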
 
Results
Of 672 patients who received ACL reconstruction, 79 were excluded (7 with concomitant high-grade ligament deficiency, and 72 with revision ACL surgery) and 593 patients were considered for analysis. These included 483 (81%) males and 110 (19%) females. There were 297 (50%) right and 296 (50%) left knees. Their mean age at the time of surgery was 26 years (range, 13-51 years), and their median length of surgical delay was 10.5 months (range, 0.4-241.8 months).
 
Most of the patients had had their injuries during sports activities (89.5%), with soccer (n=226, 42.6%) and basketball (n=163, 30.7%) being the two most common sports (Tables 1 and 2). The age distribution of patients having meniscal and cartilage injuries is shown in Table 3. The incidence of intra-articular lesions in different sports activities leading to injury is shown in Table 4.
 

Table 1. Causes of injury
 

Table 2. Type of sports activity causing anterior cruciate ligament tear
 

Table 3. Age distribution of patients who had meniscal and cartilage injuries
 

Table 4. The incidence of intra-articular lesions in different sports activities leading to injury
 
Meniscus injuries were identified in 315 (53.1%) patients. There were 146 (24.6%) isolated lateral tears, 123 (20.7%) isolated medial tears, and 46 (7.8%) combined medial and lateral tears.
 
Patients older than 30 years were more likely to suffer from meniscal injury than those younger than 30 years (60% vs 51%; P=0.043 by Fisher’s exact test). Longer surgical delay was observed in patients with meniscal lesions than in those without (median, 12.3 months vs 9.1 months; P=0.021 by Mann-Whitney U test). Also, patients with medial meniscal tear had a longer surgical delay than those with lateral meniscal tear (median, 16.7 months vs 9.0 months; P<0.001 by Mann-Whitney U test). However, no significant associations were observed between meniscal lesions and sex, cause of injury, or type of sports.
 
Overall, 139 cartilage lesions were identified in 109 (18.4%) patients. There were 16 (11.5%) patellar lesions, 92 (66.2%) femoral condyle lesions, and 31 (22.3%) tibial plateau lesions. Patients with cartilage lesions were significantly older than those without (mean, 27.6 years vs 25.1 years; P=0.034 by Student’s t test). Female patients were less likely to suffer chondral injuries than male patients (10.9% vs 20.1%; P=0.028 by Fisher’s exact test). Female sex was independently associated with the incidence of cartilage injury in binary logistic regression (P=0.029; odds ratio [OR]=0.475; 95% confidence interval [CI], 0.243-0.929) [Table 5]. Presence of meniscal tear was associated with a nearly 3-fold increased risk of cartilage lesions (P<0.001 by Fisher’s exact test; OR=2.7; 95% CI, 1.7-4.2).
 

Table 5. Binary logistic regression for the factors associated with risk of cartilage injury
 
No significant association, however, was found between cartilage lesions and surgical delay, cause of injury, or type of sports.
 
Discussion
Our study showed that longer surgical delay was present in patients with meniscal lesions, a finding that concurs with other published data. Although Slauterbeck et al,5 Piasecki et al,8 and O’Connor et al9 reported that female patients had a lower rate of meniscus injury than male patients, such an association was not observed in our study, which recruited a lower proportion of female patients; a similar observation was made by Murrell et al.10
 
It is postulated that in acute ACL injury, excessive anterolateral rotation of the tibia on the femur traps the lateral meniscus between the posterolateral aspect of the tibial plateau and the central portion of the lateral femoral condyle. The lateral meniscus is susceptible to a tear when the tibia reduces. The scenario is different, however, in patients with chronic ACL deficiency. Recurrent anterior translation of the tibia on the femur places increased stress on the medial meniscus, which is more stably fixed by the coronary ligaments, leading to a subsequent medial meniscal tear.11 In our study, ACL-deficient patients with medial meniscus tear had a median surgical delay about 8 months longer than those with lateral meniscus tear (16.7 vs 9.0 months). Mitsou and Vallianatos12 reported that the incidence of medial meniscal tears increased from 17% in patients undergoing ACL reconstruction within 3 weeks of injury to 48% in those having surgery more than 6 months after injury; such a trend was not observed for lateral meniscus tears. O’Connor et al9 found that patients who underwent ACL reconstruction more than 2 years after injury had only a 1.5-fold increased risk of lateral meniscus injuries, but a 2.2-fold increased risk of medial meniscus injuries.
 
In our study, males were found to have a higher incidence of cartilage defects than females, but there was no significant sex difference in meniscal lesions. Slauterbeck et al5 found that male sex was associated with an increased risk of meniscal and chondral lesions in ACL-deficient patients. In a study by Piasecki et al,8 female high-school athletes were found to have fewer meniscal tears (while playing soccer) and fewer intra-articular injuries to the medial femoral condyle while playing basketball, but such associations were not observed among amateur athletes. So far, there has been little research on sex differences in articular cartilage injuries accompanying ACL tears. Granan et al13 reported that cartilage lesions were nearly twice as frequent if there was a meniscal tear, and a similar observation was made in our study.
 
The association of age with meniscus tear and cartilage injury in knees with an intact ACL is less extensively studied. In a cross-sectional MRI study of nearly 1000 individuals aged 50 to 90 years from the general population, a meniscal tear was found in 31% of knees, and the incidence increased with age: 21% of subjects aged 50 to 59 years had a meniscal tear, compared with 46% of those aged 70 to 90 years.14 In several large-scale retrospective studies reviewing articular cartilage defects found during knee arthroscopy, the incidence of isolated chondral lesions without associated intra- and extra-articular knee lesions ranged from 30% to 36.6%.15 16 17 18 No statistically significant associations, however, were found between age and cartilage lesions.
 
Studies have shown that individuals who participate in vigorous physical activities are more disabled by an ACL injury than those who are relatively sedentary. Paul et al6 reported an association between the mechanism of ACL injury (jumping vs non-jumping) and the incidence of concomitant meniscus injuries, but other authors have failed to show such an association. In our study, since more than half of the patients were injured while playing soccer or basketball, an analysis was performed to evaluate whether soccer and basketball players suffered lesions different from those sustained from other causes or during other sports activities. However, the type of sports was not associated with any of the parameters we studied. A larger sample including patients with other causes of injury will be needed to determine whether differences exist among other sports activities.
 
Another limitation of this study was that patients receiving conservative treatment for their ACL injury were not recruited. This could lead to potential bias, as their risks of meniscal and articular injuries could not be estimated. We are also aware that more sophisticated systems for evaluating meniscal and chondral lesions, eg Cooper’s classification19 and the International Cartilage Repair Society (ICRS) classification system,20 could be used to map the lesions and provide a more precise anatomical description.
 
Compared with other studies, which report surgical delays ranging from 1.2 to 13 months,5 6 7 9 10 11 patients in our series had a longer surgical delay. Patients may have faced long waiting times for surgery or for imaging, including MRI. It is unclear whether repeated knee injuries, or the activities in which patients engaged before surgery, had any effect on the findings of our study.
 
Currently, there is intense debate concerning the optimal timing for ACL reconstruction.21 22 Different surgeons have different preferences: some prefer early surgery, while others favour an optimal period of rehabilitation before considering surgery. Frobell et al23 concluded in their randomised controlled trial that “In young, active adults with acute ACL tears, a strategy of rehabilitation plus early ACL reconstruction was not superior to a strategy of rehabilitation plus optional delayed ACL reconstruction.” According to Richmond et al,22 however, Frobell’s conclusion is flawed; they believe that prompt operative intervention reduces long-term osteoarthritis after ACL tear. Whatever approach the surgeon prefers, patients with ACL tear should be well informed about the risks and benefits of conservative management versus surgical reconstruction, so that they can make an informed decision.
 
Conclusions
Increased age and surgical delay were associated with meniscal tear in patients with ACL tear, and longer surgical delay was observed in patients with medial meniscal tear. Increased age, male sex, and presence of meniscal tear were all associated with chondral lesions after an ACL tear. Cause of injury or type of sports activity leading to ACL injury was not associated with intra-articular lesions.
 
References
1. Lohmander LS, Englund PM, Dahl LL, Roos EM. The long-term consequence of anterior cruciate ligament and meniscus injuries: osteoarthritis. Am J Sports Med 2007;35:1756-69.
2. Arendt E, Dick R. Knee injury patterns among men and women in collegiate basketball and soccer. NCAA data and review of literature. Am J Sports Med 1995;23:694-701.
3. Bjordal JM, Arnly F, Hannestad B, Strand T. Epidemiology of anterior cruciate ligament injuries in soccer. Am J Sports Med 1997;25:341-5.
4. Messina DF, Farney WC, DeLee JC. The incidence of injury in Texas high school basketball. A prospective study among male and female athletes. Am J Sports Med 1999;27:294-9.
5. Slauterbeck JR, Kousa P, Clifton BC, et al. Geographic mapping of meniscus and cartilage lesions associated with anterior cruciate ligament injuries. J Bone Joint Surg Am 2009;91:2094-103.
6. Paul JJ, Spindler KP, Andrish JT, Parker RD, Secic M, Bergfeld JA. Jumping versus nonjumping anterior cruciate ligament injuries: a comparison of pathology. Clin J Sport Med 2003;13:1-5.
7. Tandogan RN, Taşer O, Kayaalp A, et al. Analysis of meniscal and chondral lesions accompanying anterior cruciate ligament tears: relationship with age, time from injury, and level of sport. Knee Surg Sports Traumatol Arthrosc 2004;12:262-70.
8. Piasecki DP, Spindler KP, Warren TA, Andrish JT, Parker RD. Intraarticular injuries associated with anterior cruciate ligament tear: findings at ligament reconstruction in high school and recreational athletes. An analysis of sex-based differences. Am J Sports Med 2003;31:601-5.
9. O’Connor DP, Laughlin MS, Woods GW. Factors related to additional knee injuries after anterior cruciate ligament injury. Arthroscopy 2005;21:431-8.
10. Murrell GA, Maddali S, Horovitz L, Oakley SP, Warren RF. The effects of time course after anterior cruciate ligament injury in correlation with meniscal and cartilage loss. Am J Sports Med 2001;29:9-14.
11. Duncan JB, Hunter R, Purnell M, Freeman J. Meniscal injuries associated with acute anterior cruciate ligament tears in alpine skiers. Am J Sports Med 1995;23:170-2.
12. Mitsou A, Vallianatos P. Meniscal injuries associated with rupture of the anterior cruciate ligament: a retrospective study. Injury 1988;19:429-31.
13. Granan LP, Bahr R, Lie SA, Engebretsen L. Timing of anterior cruciate ligament reconstructive surgery and risk of cartilage lesions and meniscal tears: a cohort study based on the Norwegian National Knee Ligament Registry. Am J Sports Med 2009;37:955-61.
14. Englund M, Guermazi A, Gale D, et al. Incidental meniscal findings on knee MRI in middle-aged and elderly persons. N Engl J Med 2008;359:1108-15.
15. Arøen A, Løken S, Heir S, et al. Articular cartilage lesions in 993 consecutive knee arthroscopies. Am J Sports Med 2004;32:211-5.
16. Curl WW, Krome J, Gordon ES, Rushing J, Smith BP, Poehling GG. Cartilage injuries: a review of 31,516 knee arthroscopies. Arthroscopy 1997;13:456-60.
17. Hjelle K, Solheim E, Strand T, Muri R, Brittberg M. Articular cartilage defects in 1,000 knee arthroscopies. Arthroscopy 2002;18:730-4.
18. Widuchowski W, Widuchowski J, Trzaska T. Articular cartilage defects: study of 25,124 knee arthroscopies. Knee 2007;14:177-82.
19. Cooper DE, Arnoczky SP, Warren RF. Meniscal repair. Clin Sports Med 1991;10:529-48.
20. Brittberg M, Winalski CS. Evaluation of cartilage injuries and repair. J Bone Joint Surg Am 2003;85-A Suppl 2:58-69.
21. Bernstein J. Early versus delayed reconstruction of the anterior cruciate ligament: a decision analysis approach. J Bone Joint Surg Am 2011;93:e48.
22. Richmond JC, Lubowitz JH, Poehling GG. Prompt operative intervention reduces long-term osteoarthritis after knee anterior cruciate ligament tear. Arthroscopy 2011;27:149-52.
23. Frobell RB, Roos EM, Roos HP, Ranstam J, Lohmander LS. A randomized trial of treatment for acute anterior cruciate ligament tears. N Engl J Med 2010;363:331-42.

Comparison of efficacy and tolerance of short-duration open-ended ureteral catheter drainage and tamsulosin administration to indwelling double J stents following ureteroscopic removal of stones

Hong Kong Med J 2015 Apr;21(2):124–30 | Epub 10 Mar 2015
DOI: 10.12809/hkmj144292
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Comparison of efficacy and tolerance of short-duration open-ended ureteral catheter drainage and tamsulosin administration to indwelling double J stents following ureteroscopic removal of stones
Vikram S Chauhan, MB, BS, MS (Surgery)1; Rajeev Bansal, MB, BS, MS (Surgery)1; Mayuri Ahuja, MB, BS, DGO2
1 School of Medical Sciences & Research, Sharda University, Greater Noida (U.P.) 201306, India
2 Kokila Dhirubhai Ambani Hospital & Medical Research Institute, Andheri West, Mumbai 400053, India
Corresponding author: Dr Vikram S Chauhan (vsing73@rediffmail.com)
Abstract
Objectives: To evaluate the efficacy of short-duration, open-ended ureteral catheter drainage as a replacement for an indwelling stent, and to study the effect of tamsulosin on stent-induced pain and storage symptoms following uncomplicated ureteroscopic removal of stones.
 
Design: Prospective randomised study.
 
Setting: School of Medical Sciences and Research, Sharda University, Greater Noida, India.
 
Patients: Patients who underwent ureteroscopic removal of stones for lower ureteral stones between November 2011 and January 2014 were randomly assigned into three groups. Patients in group 1 (n=33) were stented with 5-French double J stent for 2 weeks. Patients in group 2 (n=35) were administered tablet tamsulosin 0.4 mg once daily for 2 weeks in addition to stenting, and those in group 3 (n=31) underwent 5-French open-ended ureteral catheter drainage for 48 hours.
 
Main outcome measures: All patients were evaluated for flank pain using visual analogue scale scores at days 1, 2, 7, and 14, and for storage (irritative) bladder symptoms using International Prostate Symptom Score on days 7 and 14, and for quality-of-life score (using International Prostate Symptom Score) on day 14.
 
Results: Among the 99 patients, visual analogue scale scores were significantly lower in groups 2 and 3 than in group 1 (P<0.0001). The International Prostate Symptom Scores for all parameters were lower in groups 2 and 3 than in group 1 on both days 7 and 14 (P<0.0001). Analgesic requirements were similar in all three groups.
 
Conclusion: Open-ended ureteral catheter drainage is as effective as, and better tolerated than, routine stenting following uncomplicated ureteroscopic removal of stones. Tamsulosin reduces storage symptoms and improves quality of life after ureteral stenting.
 
 
New knowledge added by this study
  •  This study shows that short-duration (up to 48 hours) ureteral drainage following ureteroscopic removal of stones (URS) meets the need for postoperative drainage as effectively as indwelling stent placement and is better tolerated. Hence, it can replace double J stenting.
  • Routine tamsulosin administration in patients with indwelling stents following URS has beneficial effects not only on irritative bladder symptoms but also on flank pain (both persistent and voiding).
Implications for clinical practice or policy
  •  Replacement of stents with short-duration open-ended ureteral catheter drainage allows earlier and fuller rehabilitation of patients following URS. This is a viable option because there is no need for follow-up for stent-related symptoms, or for maintaining records to plan stent removal (no lost or retained stents).
  • It avoids a second invasive endoscopic procedure of stent removal, thereby reducing the medical and financial burden on the patient (especially important in developing countries). Patients are more likely to undergo URS again if required in the future (with stone recurrence) than opt for less effective or expensive choices like medical management, shock wave lithotripsy, or alternative forms of medicine.
  • In stented patients, tamsulosin administration improves the overall quality of life, and makes the period with stent in situ more bearable and asymptomatic.
 
 
Introduction
Ureteroscopic removal of stones (URS) is the standard endoscopic method for treatment of lower ureteric calculi. In recent times, this procedure has not required routine dilatation of the ureteric orifice, owing to the availability of small-calibre rigid ureteroscopes that can be easily manipulated into the ureter in most cases.
 
Once the stones are removed, an indwelling ureteral double J stent is placed, which remains in situ postoperatively for 2 to 4 weeks. The duration depends on a variety of factors, such as the difficulty of stone removal, any mucosal injury, and associated stricture of the ureter or its meatus. Finney1 was the first to describe the use of double J stents, in 1978.2 The use of stents has proved beneficial in various studies, because they prevent or reduce ureteric oedema, clot colic, and the subsequent development of secondary ureteric stricture in cases with mucosal injury or difficult stones.3 4 5 The use of ureteral stents, however, is not without complications: patients may develop flank pain, haematuria, clot retention, dysuria, frequency, and other irritative bladder symptoms following stent placement. Hence, many authors have questioned the need for routine stent placement, or have advocated early removal.6 Recently, researchers have proposed that the irritative and other symptoms due to stents can be reduced or overcome by the use of alpha blockers.7 With this background, we conducted a prospective randomised study to assess the efficacy of oral tamsulosin for 14 days following stenting, and of an open-ended ureteral catheter for 48 hours instead of a stent, as viable options in patients undergoing uncomplicated URS for lower ureteric stones.
 
Methods
This study was conducted at School of Medical Sciences and Research, Sharda University, Greater Noida, India, after obtaining due clearance from the ethics committee. Recruitment of patients was done over a period from November 2011 to January 2014 and included a total of 99 patients who underwent URS for lower ureteric stones.
 
Inclusion criteria were lower ureteric stones, defined as stones imaged below the lower border of the sacroiliac joint and up to 10 mm in diameter on computed tomography. Stones larger than 10 mm in diameter, presence of ipsilateral kidney stones, cases with lower ureteric or meatal stricture requiring dilatation, and cases with significant mucosal injury (flap formation) per-operatively were excluded.
 
All patients underwent URS under spinal anaesthesia using an 8-French rigid ureteroscope; stones requiring fragmentation were broken with a pneumatic lithoclast and the fragments retrieved with forceps. One surgeon performed all the interventional procedures during the study period.
 
The patients were randomly assigned to three groups using a random number table. We chose an arbitrary starting place on the table and read to the right from that number. A digit from 1 to 3 assigned the case to group 1, a digit from 4 to 6 to group 2, and a digit from 7 to 9 to group 3 (a value of 0 was ignored). Following this randomisation protocol, a duty doctor prepared 120 serially numbered slips of paper (indicating the number of enrolment), each stating the group to which a new case was to be assigned. The slips were folded, stapled, stacked in a box, and stored in the operating theatre. After completion of the URS, the floor nurse opened a slip to reveal the enrolment number and the group (1, 2, or 3) to which the patient would be allocated, thereby deciding further intervention.
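The digit-mapping rule described above can be sketched in a few lines; the seeded random generator below merely stands in for the printed random number table, and the names are illustrative.

```python
import random

def assign_group(digit):
    """Map a random-table digit to a study group per the protocol:
    1-3 -> group 1, 4-6 -> group 2, 7-9 -> group 3, 0 -> skipped."""
    if 1 <= digit <= 3:
        return 1
    if 4 <= digit <= 6:
        return 2
    if 7 <= digit <= 9:
        return 3
    return None  # a 0 is ignored; read the next digit

# Simulate reading along a row of the table until 120 slips are filled
rng = random.Random(42)  # seeded stand-in for the printed table
assignments = []
while len(assignments) < 120:
    group = assign_group(rng.randint(0, 9))
    if group is not None:
        assignments.append(group)

print(assignments[:10])
```

Note that this scheme allocates each slip independently, so the three arms end up approximately, but not exactly, equal in size, consistent with the unequal group sizes (33, 35, and 31) reported in the abstract.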
 
Patients in group 1 underwent double J stent placement following URS for a period of 2 weeks. Patients in group 2 were administered tablet tamsulosin 0.4 mg once daily for 2 weeks in addition to double J stent. Patients in group 3 underwent placement of an open-ended 5-French ureteral catheter following the URS procedure, the distal end of which was introduced into the lumen of Foley catheter. Both the ureteric and Foley catheter were removed on the second postoperative day in group 3 patients.
 
A 5-French 25-cm double J stent was used for stenting, and the duration of surgery was recorded as the time from introduction of the ureteroscope to placement of the Foley catheter. Postoperatively, patients were assessed for flank pain (persistent or voiding) by asking them to report the pain on a visual analogue scale (VAS) of 0 to 10 (0 = no pain; 10 = pain as severe as it could be) on postoperative days 1, 2, 7, and 14. Patients were also asked to report storage symptoms using the International Prostate Symptom Score (IPSS) at 1 and 2 weeks postoperatively to assess irritative bladder symptoms, while the IPSS quality-of-life index was assessed at 2 weeks postoperatively. All stented patients were discharged with tablet levofloxacin 250 mg orally once daily for 2 weeks as suppressive prophylaxis against infection.
 
Patients with an indwelling double J stent underwent stent removal after 2 weeks by cystoscopy under local anaesthesia using 2% lidocaine jelly, supplemented with intravenous pentazocine 30 mg on a patient-need basis, and were asked to report the pain experienced during stent removal on a VAS. VAS scores were administered and recorded in the local language (Hindi): for in-patients (wards), by the floor manager (administrative personnel) with assistance from the nurse on duty; for out-patients on follow-up, by an intern and the nurse on duty. All staff assessing VAS were blinded and had no direct influence on, or active role in, the treatment or assessment protocol.
 
On completion of 2 weeks after surgery, all patients were asked, “Would you opt for the same procedure again as treatment if you develop ureteral stones in the future?” Patients complaining of pain postoperatively were given tramadol 50 mg intravenously if needed. If pain persisted, patients were given intravenous pentazocine 30 mg. All patients underwent intravenous urography 1 month after the procedure to document stone clearance and any development of ureteral stricture. Patients were asked to report to the out-patient department if any other complications occurred following discharge.
 
The sample size was estimated as follows. We assumed a margin of error of 5%, a confidence level of 90%, and a population size of 45 (patients admitted with flank pain and requiring URS for stones; in our institution, roughly 45 to 50 such cases undergo URS in a typical year). Assuming a response distribution of 50%, the calculated sample size was 39, using the following formula:
Sample size n and margin of error E are given by:
x = Z(c/100)^2 × r × (100 − r)
n = Nx / ((N − 1)E^2 + x)
E = √[(N − n)x / (n(N − 1))]
where N is the population size, r is the fraction of responses of interest (in %), and Z(c/100) is the critical value for the confidence level c.
 
This calculation is based on the normal distribution, and assumes more than 30 samples and a power of 80%. Hence, we chose to recruit approximately 35 patients in each arm of the study.
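As a sanity check, the formula above can be evaluated directly with the stated inputs (c = 90, r = 50, N = 45, E = 5); the small table of critical values inside the function is an illustrative subset, not part of the original text.

```python
def sample_size(N, c, r, E):
    """Finite-population sample size (normal approximation).
    N: population size; c: confidence level (%);
    r: response distribution (%); E: margin of error (%)."""
    # Critical value Z(c/100) for a few common confidence levels
    z = {90: 1.645, 95: 1.96, 99: 2.576}[c]
    x = z ** 2 * r * (100 - r)
    return N * x / ((N - 1) * E ** 2 + x)

# Inputs as stated in the text
n = sample_size(N=45, c=90, r=50, E=5)
print(round(n))  # rounds to the 39 reported in the text
```

With N = 45 the finite-population correction dominates, which is why the required sample (39) is close to the population itself; recruiting about 35 per arm, as the authors did, covers this target across the whole study.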
 
Statistical analyses
After collation of data, Student’s t test and the Pearson chi-squared test were used to compare the three groups for age, sex, stone size, and operating time. We also compared the severity of flank pain on postoperative days 1 and 2 and weeks 1 and 2, the IPSS for storage symptoms and the total IPSS at postoperative weeks 1 and 2, and the quality-of-life index at 2 weeks. Results from groups 2 and 3 were compared with group 1 to draw conclusions. Fisher’s exact test and the Pearson chi-squared test were used to compare the number of patients who needed intravenous analgesics for severe postoperative pain, and to examine responses to the question, “Would you opt for the same procedure again as treatment if you develop ureteral stones in the future?”
 
Results
There was no significant variation among the three groups with regard to age, sex, stone size, and operating time (Table 1). The VAS score for flank pain, however, showed significant differences among the three groups. On postoperative day 1, the mean (± standard deviation) VAS scores in groups 1, 2, and 3 were 2.73 ± 1.14, 2.34 ± 1.12, and 2.35 ± 0.86 respectively, but the differences were not statistically significant (groups 1 and 2, P=0.17; groups 1 and 3, P=0.15). On day 7, the mean VAS scores for groups 2 and 3 were 0.97 ± 0.77 and 1.00 ± 0.72 respectively, significantly lower than the group 1 score of 2.85 ± 1.52 (P<0.0001). On day 14, the mean VAS scores for groups 1, 2, and 3 were 2.48 ± 1.40, 0.66 ± 0.67, and 0.55 ± 0.56 respectively (P<0.0001). This amounted to significantly greater pain in group 1 patients compared with those in groups 2 and 3 (for groups 1-2 and 1-3, P<0.0001; Fig 1). Among those stented, the overall mean VAS score for stent removal using 2% lidocaine jelly was 3.76 ± 1.55; by sex (male:female = 36:32), the mean scores were 4.97 ± 0.80 for males and 2.41 ± 0.96 for females, a statistically significant difference (P<0.0001).
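For illustration, the day-7 group 1 versus group 2 comparison can be reproduced approximately from the reported summary statistics. This is a hedged sketch, not the authors' actual analysis: it assumes Welch's unequal-variance t statistic and the group sizes reported elsewhere in the paper (33 and 35):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and approximate degrees of freedom
    computed from summary statistics (means, SDs, group sizes)."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Day-7 VAS scores: group 1 = 2.85 +/- 1.52, group 2 = 0.97 +/- 0.77
t, df = welch_t(2.85, 1.52, 33, 0.97, 0.77, 35)
print(f"t = {t:.2f}, df = {df:.1f}")
```

With |t| above 6 on roughly 47 degrees of freedom, the corresponding two-sided P value is far below 0.0001, consistent with the reported result.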
 

Table 1. General characteristics of study patients
 

Figure 1. Visual analogue scale (VAS) scores on postoperative day 14
 
Analyses of IPSS on both postoperative days 7 and 14 showed a significant decrease in group 2 compared with group 1 for all four storage parameters (bladder sensation, frequency, urgency, and nocturia) as well as the total IPSS (P<0.0001). Group 3 patients had minimal mean IPSS scores to begin with (Table 2). The mean quality-of-life scores for groups 1, 2, and 3 were 4.00 ± 0.92, 1.37 ± 0.86, and 0.52 ± 0.50 respectively; quality of life was significantly better in groups 2 and 3 than in group 1 (P<0.00001; Fig 2 and Table 3).
 

Table 2. Mean (± standard deviation) International Prostate Symptom Scores (IPSS) according to groups on postoperative day 7
 

Figure 2. International Prostate Symptom Score (IPSS) and quality-of-life score on postoperative day 14
 

Table 3. Pain requiring analgesia, quality of life, and willingness to opt for the same procedure again with stone recurrence among the groups
 
Nine patients in group 1, 11 in group 2, and seven in group 3 complained of pain requiring injection of tramadol 50 mg (Table 3). Only one patient (in the stent-only group) further required intravenous pentazocine 30 mg for persistent pain. No patient in any group required intravenous analgesics after day 2, making analgesic requirements similar across the groups. One patient who was stented and had not received tamsulosin reported gross haematuria on the sixth day, requiring readmission and catheterisation with bladder washout; the haematuria resolved with conservative treatment. Beyond the 2-week period, no patient reported any other complication during the 2-month follow-up.
 
In this study, 20 patients in group 1, 29 in group 2, and all patients in group 3 expressed willingness to undergo the same procedure again in future if needed. A higher percentage of patients in groups 2 and 3 than in group 1 were thus willing to undergo repeat surgery (if needed), and the difference was statistically significant (for groups 1-2, P=0.04, and for groups 1-3, P=0.0003; Table 3). Two patients from the open drainage group were lost to follow-up after 7 days. There was no crossover from one group to the other once assigned.
 
Discussion
Indwelling double J stents are routinely placed following URS to prevent flank pain and secondary ureteral strictures.4 8 9 However, duration-dependent symptoms due to ureteral stents have been well documented. Pollard and Macfarlane10 reported stent-related symptoms in 18 (90%) out of 20 patients who had indwelling ureteral stents following URS. Bregg and Riehle11 reported that symptoms such as gross haematuria (42%), dysuria (26%), and flank pain (30%) appeared in stented patients prior to being taken up for shock wave lithotripsy. Stoller et al8 documented ureteral stent–related symptoms, like flank pain, frequency, urgency, and dysuria, in at least 50% of patients who had an indwelling ureteral stent. In a series by Han et al,12 haematuria was reported as the most common symptom (69%) followed by dysuria (45.8%), frequency (42.2%), lower abdominal pain during voiding (32.2%), and flank pain (25.4%). Most studies report that apart from urgency and dysuria (which improve with time), there is no relief in other symptoms till the stent is removed.
 
Wang et al7 showed that administration of an α-blocker (tamsulosin) in stented patients improves flank pain and IPSS storage symptoms, along with an overall improvement in quality of life. They reported mean scores for frequency, urgency, and nocturia of 3.7, 3.82, and 2.01 respectively in stented patients, versus 1.55, 1.43, and 0.65 in those who received tamsulosin for 2 weeks. The mean IPSS quality-of-life score was 4.21 in the stented group and 1.6 in the stented + tamsulosin group. Moon et al13 reported that, compared with stenting, all the storage categories of the IPSS were significantly lower in the 1-day ureteral catheter group (P<0.01). Although the VAS scores were not significantly different on postoperative day 1, they were significantly lower in the 1-day ureteral catheter group on postoperative days 7 and 14 (P<0.01).13
 
In our study, the mean total IPSS at 2 weeks postoperatively was 9.64, 1.71, and 0.13 for groups 1, 2, and 3 respectively (Fig 2). We also found that the mean VAS scores for flank pain and the mean IPSS scores for bladder sensation, frequency, urgency, and nocturia were significantly higher in group 1 than in groups 2 and 3 (Figs 1 and 2). These findings suggest that the indwelling double J stent causes time-dependent pain and storage symptoms due to persistent bladder irritation, and that administration of tamsulosin significantly decreased these symptoms. Our patients who received tamsulosin also fared much better on the quality-of-life index at both 1 and 2 weeks postoperatively than the group with stent placement only (mean score, 1.37 vs 4.00), while those who underwent open-ended catheter drainage had minimal irritative symptoms (Table 2).
 
In addition, removal of an indwelling stent constitutes an additional procedure, which is not only a physical but also a financial burden to the patient, especially in a developing country like India. Kim et al14 evaluated pain during cystoscopy following an intramuscular injection of diclofenac 90 mg. The mean VAS score during the procedure was 7.8 ± 0.7, indicating severe pain. In addition, only 22.5% of patients responded “yes” to a questionnaire about their willingness to submit to the same procedure again.14 Moon et al13 reported a mean VAS score of 4.96 ± 1.29 for stent removal using lidocaine gel. Although the mean VAS score for stent removal under local anaesthesia in our series was 3.76, the means for males and females were 4.97 and 2.41, respectively. This amounts to moderately severe pain in males which, in association with irritative bladder symptoms, could influence a patient’s willingness to undergo a repeat procedure in future if required. Besides, manipulation during stent removal under local anaesthesia, especially in males, could lead to urethral or bladder injuries, a drawback that Hollenbeck et al15 have observed.
 
Many have questioned the need for ureteral stenting following URS. Denstedt et al16, in a series of 58 patients who underwent URS (29 stented and 29 non-stented), reported no significant difference in complications or success rates between stented and non-stented cases. However, Djaladat et al17 reported that when ureteroscopy was performed without catheterization, flank pain and renal colic could result from early ureteral oedema, implying that some postoperative drainage is better than no drainage at all. This formed the premise for using the open-ended ureteral catheter in the immediate postoperative period in our series, and the significantly lower VAS scores suggest that its placement can be as effective as stenting, with minimal irritative symptoms.17 Nabi et al18 concluded that there was no significant difference in postoperative requirements for analgesia, urinary tract infection, the stone-free rate, or ureteric stricture formation in patients who underwent uncomplicated URS. There was no significant difference in analgesic requirement among the three groups in our study: 9, 11, and 7 patients in groups 1, 2, and 3 respectively required intravenous tramadol on postoperative days 1 and 2, and only one patient in group 1 needed further analgesia. No patient needed analgesics beyond the second postoperative day, which is comparable to the series by Moon et al13, who reported that the proportion of patients needing intravenous analgesics for severe postoperative flank pain was not significantly different between the stented and open-drainage groups.
 
In our study, 20 of 33 patients in group 1, 29 of 35 in group 2, and all 31 patients in group 3 responded affirmatively when asked whether they would opt for the same procedure again if they developed ureteral stones in the future. The P values for willingness to undergo a repeat procedure were 0.04 and 0.0003 when comparing groups 1-2 and 1-3 respectively, in line with another study (willingness P=0.02 in favour of open-ended drainage).13 These results show that patients in groups 2 and 3 (tamsulosin and open-catheter drainage) were significantly more likely to accept a repeat procedure if needed. Hence, it can be inferred that administration of tamsulosin following stenting, or placement of an open-ended catheter (removed on day 2), was better tolerated by patients than an indwelling stent-only procedure.
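The willingness comparisons can be checked against the reported counts with a self-contained Fisher's exact test. This sketch is illustrative only, and its two-sided P values may differ slightly from the published figures depending on the software and test variant used:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    with the margins fixed, sum the probabilities of all tables whose
    probability does not exceed that of the observed table."""
    r1, c1, n = a + b, a + c, a + b + c + d
    def p_table(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = p_table(a)
    lo, hi = max(0, r1 - (n - c1)), min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Willingness counts reported above (yes / no):
# group 1 = 20/13, group 2 = 29/6, group 3 = 31/0
p12 = fisher_exact_two_sided(20, 13, 29, 6)   # group 1 vs group 2
p13 = fisher_exact_two_sided(20, 13, 31, 0)   # group 1 vs group 3
print(round(p12, 3), round(p13, 4))
```

The group 1 versus group 3 comparison comes out far below 0.01, consistent with the reported P=0.0003; the group 1 versus group 2 comparison lands near the conventional 0.05 threshold.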
 
The most obvious limitations of our study were the relatively small sample size and the unblinded design, which left open a possible placebo effect in the tamsulosin group. We believe, however, that since patients in the stent-only group were also given tablet levofloxacin 250 mg as suppressive prophylaxis after discharge, any relief in lower urinary tract symptoms in the tamsulosin group could not be attributed to a placebo effect of tablet-taking alone. Assessment of VAS was done by personnel who were blinded and had no direct influence on the treatment or assessment protocol; this ruled out surgeon bias and any influence on patients’ reporting of VAS scores. The degree of difficulty, complexity, and duration of the procedure could be construed as confounding factors. However, the relatively simple inclusion and exclusion criteria, which excluded only the absolute indications for stenting, obviate this concern, and the results demonstrate that open-ended short-duration ureteral drainage can replace stenting in all other scenarios.
 
Conclusion
Accepting the limitations of a small sample size, open-ended catheter drainage for 2 days is better tolerated in terms of flank pain and irritative bladder symptoms than an indwelling double J stent for 2 weeks, without any significant difference in complications or efficacy. We recommend this procedure as a viable replacement for routine stenting following URS. In patients who do undergo stenting following URS, administration of tamsulosin significantly reduces stent-related flank pain and irritative symptoms and enhances overall quality of life. In view of the possible placebo effect in group 2, larger multicentre randomised controlled trials are needed to assess the role of tamsulosin in countering post-URS stenting symptoms, given its wide acceptance for pain relief and stone passage in treating lower ureteral stones.
 
Declaration
No conflicts of interest were declared by the authors.
 
References
1. Finney RP. Experience with new double J ureteral catheter stent. J Urol 1978;120:678-81.
2. Hepperlen TW, Mardis HK, Kammandel H. Self-retained internal ureteral stents: a new approach. J Urol 1978;119:731-4.
3. Lee JH, Woo SH, Kim ET, Kim DK, Park J. Comparison of patient satisfaction with treatment outcomes between ureteroscopy and shock wave lithotripsy for proximal ureteral stones. Korean J Urol 2010;51:788-93.
4. Harmon WJ, Sershon PD, Blute ML, Patterson DE, Segura JW. Ureteroscopy: current practice and long-term complications. J Urol 1997;157:28-32.
5. Boddy SA, Nimmon CC, Jones S, et al. Acute ureteric dilatation for ureteroscopy. An experimental study. Br J Urol 1988;61:27-31.
6. Hosking DH, McColm SE, Smith WE. Is stenting following ureteroscopy for removal of distal ureteral calculi necessary? J Urol 1999;161:48-50.
7. Wang CJ, Huang SW, Chang CH. Effects of tamsulosin on lower urinary tract symptoms due to double-J stent: a prospective study. Urol Int 2009;83:66-9.
8. Stoller ML, Wolf JS Jr, Hofmann R, Marc B. Ureteroscopy without routine balloon dilation: an outcome assessment. J Urol 1992;147:1238-42.
9. Netto Júnior NR, Claro Jde A, Esteves SC, Andrade EF. Ureteroscopic stone removal in the distal ureter. Why change? J Urol 1997;157:2081-3.
10. Pollard SG, Macfarlane R. Symptoms arising from Double-J ureteral stents. J Urol 1988;139:37-8.
11. Bregg K, Riehle RA Jr. Morbidity associated with indwelling internal stents after shock wave lithotripsy. J Urol 1989;141:510-2.
12. Han CH, Ha US, Park DJ, Kim SH, Lee YS, Kang SH. Change of symptom characteristics with time in patients with indwelling double-J ureteral stents. Korean J Urol 2005;46:1137-40.
13. Moon KT, Cho HJ, Cho JM, et al. Comparison of an indwelling period following ureteroscopic removal of stones between Double-J stents and open-ended catheters: a prospective, pilot, randomized, multicenter study. Korean J Urol 2011;52:698-702.
14. Kim KS, Kim JS, Park SW. Study on the effects and safety of propofol anaesthesia during cystoscopy. Korean J Urol 2006;47:1230-5.
15. Hollenbeck BK, Schuster TG, Faerber GJ, Wolf JS Jr. Routine placement of ureteral stents is unnecessary after ureteroscopy for urinary calculi. Urology 2001;57:639-43.
16. Denstedt JD, Wollin TA, Sofer M, Nott L, Weir M, D’A Honey RJ. A prospective randomized controlled trial comparing nonstented versus stented ureteroscopic lithotripsy. J Urol 2001;165:1419-22.
17. Djaladat H, Tajik P, Payandemehr P, Alehashemi S. Ureteral catheterization in uncomplicated ureterolithotripsy: a randomized, controlled trial. Eur Urol 2007;52:836-41.
18. Nabi G, Cook J, N’Dow J, McClinton S. Outcomes of stenting after uncomplicated ureteroscopy: systematic review and meta-analysis. BMJ 2007;334:572.

Surveillance of emerging drugs of abuse in Hong Kong: validation of an analytical tool

Hong Kong Med J 2015 Apr;21(2):114–23 | Epub 10 Mar 2015
DOI: 10.12809/hkmj144398
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Surveillance of emerging drugs of abuse in Hong Kong: validation of an analytical tool
Magdalene HY Tang, PhD1; CK Ching, FRCPA, FHKAM (Pathology)1; ML Tse, FHKCEM, FHKAM (Emergency Medicine)2; Carol Ng, BSW, MA3; Caroline Lee, MSc1; YK Chong, MB, BS1; Watson Wong, MSc1; Tony WL Mak, FRCPath, FHKAM (Pathology)1; Emerging Drugs of Abuse Surveillance Study Group
1 Toxicology Reference Laboratory, Hospital Authority, Hong Kong
2 Hong Kong Poison Information Centre, Hospital Authority, Hong Kong
3 Hong Kong Lutheran Social Service, the Lutheran Church – Hong Kong Synod, Homantin, Hong Kong
Corresponding author: Dr Tony WL Mak (makwl@ha.org.hk)
Abstract
Objective: To validate a locally developed chromatography-based method to monitor emerging drugs of abuse whilst performing regular drug testing in abusers.
 
Design: Cross-sectional study.
 
Setting: Eleven regional hospitals, seven social service units, and a tertiary level clinical toxicology laboratory in Hong Kong.
 
Participants: A total of 972 drug abusers and high-risk individuals were recruited from acute, rehabilitation, and high-risk settings between 1 November 2011 and 31 July 2013. A subset of the participants was of South Asian ethnicity. In total, 2000 urine or hair specimens were collected.
 
Main outcome measures: Proof of concept that surveillance of emerging drugs of abuse can be performed whilst conducting routine drug of abuse testing in patients.
 
Results: The method was successfully applied to 2000 samples, with three emerging drugs of abuse detected in five samples: PMMA (paramethoxymethamphetamine), TFMPP [1-(3-trifluoromethylphenyl)piperazine], and methcathinone. The method also detected conventional drugs of abuse, with codeine, methadone, heroin, methamphetamine, and ketamine being the most frequently detected. Other findings included significantly higher rates of opiate use (heroin, methadone, and codeine) among South Asians, and significantly higher detection rates of ketamine and cocaine in acute subjects compared with the rehabilitation population.
 
Conclusions: This locally developed analytical method is a valid tool for simultaneous surveillance of emerging drugs of abuse and routine drug monitoring of patients at minimal additional cost and effort. Continued, proactive surveillance and early identification of emerging drugs will facilitate prompt clinical, social, and legislative management.
 
 
New knowledge added by this study
  •  A locally developed method is a valid tool for monitoring the penetrance of emerging drugs of abuse into our society whilst performing regular drugs of abuse testing.
Implications for clinical practice or policy
  •  Implementation of the analytical method in the routine drug monitoring of drug abusers will enable simultaneous surveillance of novel drugs of abuse at minimal extra cost and effort.
  •  Continued and proactive surveillance of emerging drugs of abuse in the population will facilitate prompt measures in the clinical, social, and legislative management of these constantly changing and potentially dangerous drugs.
 
 
Introduction
Despite continuous efforts, drug abuse remains a major social and medical problem in today’s society. In particular, there has been a rapid and continued growth of ‘emerging’ drugs of abuse (DOA) on a global scale.1 2 Emerging DOA, also called designer drugs or novel psychoactive substances, bear a chemical and/or pharmacological resemblance to conventional DOA and pose a threat to public health, but are often (initially) not controlled by law. They are easily accessible from street dealers or through the internet, and are often presumed to be safer than conventional DOA owing to their ‘legal’ or ‘herbal’ nature.1 3 In Hong Kong, the drug scene has also been penetrated in recent years by such substances as the piperazine derivative TFMPP [1-(3-trifluoromethylphenyl)piperazine],4 the synthetic cannabinoids,5 the methamphetamine derivative PMMA (paramethoxymethamphetamine),6 and the NBOMe compounds [N-(2-methoxybenzyl) derivatives of phenethylamine].7 Some of these novel drugs pose a significant health threat and numerous fatalities have been reported worldwide.8 9 10 In particular, PMMA and the NBOMe drugs have been associated with severe clinical toxicity and fatalities in Hong Kong.6 7
 
Effective diagnosis and treatment of emerging DOA intoxication rely on the timely and accurate detection of these substances. Whilst immunoassay and drug screening methods are well-established for conventional DOA, laboratory analysis of novel drugs is not so readily available. This inevitably leads to the delayed discovery of emerging drugs and consequently early medical and social intervention is compromised. Recently, a liquid chromatography–tandem mass spectrometry (LC-MS/MS)–based method has been established locally that allows the simultaneous detection of 47 commonly abused drugs in addition to over 45 emerging DOA and their metabolites in urine11 and hair (the latter manuscript in preparation). The aim of the current study was to validate this analytical method as a tool to monitor emerging DOA whilst performing regular DOA testing by applying the method to 2000 urine and hair specimens collected from drug abusers as well as high-risk individuals.
 
Methods
Sample collection
Between 1 November 2011 and 31 July 2013, 964 urine and 1036 hair specimens (n=2000 in total) were collected for analysis. Subjects included in the study were patients/clients of the following units who were suspected to be actively using DOA and who agreed to participate: (i) substance abuse clinics within the Hospital Authority (Castle Peak Hospital, Kowloon Hospital, Kwai Chung Hospital, Pamela Youde Nethersole Eastern Hospital, Prince of Wales Hospital, Queen Mary Hospital); (ii) accident and emergency (A&E) departments within the Hospital Authority (Pamela Youde Nethersole Eastern Hospital, Pok Oi Hospital, Princess Margaret Hospital, Queen Mary Hospital, Tuen Mun Hospital, United Christian Hospital, Yan Chai Hospital); (iii) the Hong Kong Poison Information Centre (HKPIC) toxicology clinic; (iv) counselling centres for psychotropic substance abusers (CCPSA; Evergreen Lutheran Centre, Rainbow Lutheran Centre, Cheer Lutheran Centre); (v) various rehabilitation centres including the Society of Rehabilitation and Crime Prevention (SRACP), Operation Dawn, and Caritas Wong Yiu Nam Centre; and (vi) Youth Outreach. Pregnant women and individuals aged under 18 years were excluded from the study. The majority of the participants were Chinese, although those recruited from SRACP were exclusively South Asians.
 
The study was approved by the institutional ethics review boards (Kowloon West Cluster: KW/FR-11-011 (41-05); Kowloon Central/Kowloon East Cluster: KC/KE-11-0170/ER-2; Hong Kong West Cluster: UW 11-398; Hong Kong East Cluster: HKEC-2011-068; New Territories West Cluster: NTWC/CREC/989/11; New Territories East Cluster: CRE-2011.427). Subjects donated samples on a voluntary basis and informed consent was obtained. Each subject donated either urine or hair, or both, at each donation episode. Some gave repeated sample(s): donations were at least 8 weeks apart. Urine was collected in a plain plastic bottle and frozen until analysis. For hair, a lock of hair was collected from the back of the head for analysis. The root end was identified to facilitate segmental analysis.
 
Sample analysis
The methodology for urine analysis has been detailed in a separate publication.11 In brief, the urine sample was subjected to an initial glucuronidase digestion, followed by solid phase extraction and sample concentration. The hair sample (first 3-cm segment) was first decontaminated and subsequently subjected to simultaneous micro-pulverisation and extraction in solvent. The final filtrates were analysed by LC-MS/MS performed on an Agilent 6430 triple-quadrupole mass spectrometer (Agilent Technologies, Singapore) coupled with Agilent 1290 Infinity liquid chromatography system. The 47 conventional and 47 emerging DOA identified for analysis are listed in Table 1. The analytical method had previously been validated according to international guidelines.12
 

Table 1. The conventional and emerging drugs of abuse being analysed for
 
Statistical analysis
Statistical analysis was performed using Fisher’s exact test, with a P value of less than 0.05 considered statistically significant. Comparison of the drug detection rates was made between (i) different ethnic groups and (ii) samples collected in the rehabilitation and acute settings.
 
Results
Subject demographics
In total, 972 individuals took part in the study (720 males, 252 females). Their respective mean and median age was 35 and 33 years (range, 18-74 years). Of the 972 subjects, 815 were single-time donors and 157 donated repeated samples (between 2 and 6 donations each). There were 1224 donation episodes (815 from single-time donors; 409 from repeated donors) and 2000 specimens collected in total, of which 964 were urine and 1036 were hair (Fig 1). Of the 1224 donation episodes, the subjects were recruited from: substance abuse clinics (n=822), drug rehabilitation and counselling centres (n=320), youth hangout centre (n=41), HKPIC toxicology clinic (n=28), and A&E departments (n=13).
 

Figure 1. Subject demographics
 
Emerging drugs of abuse
In the 2000 specimens analysed, five specimens were found to contain three emerging DOA: PMMA, TFMPP, and methcathinone. A methamphetamine derivative, PMMA, was detected in three hair specimens (cases 1-3, Table 2). All three hair samples were also found to contain cocaine and ketamine. Nonetheless, PMMA was not detected in the subjects’ concurrent urine samples.
 

Table 2. Emerging drugs of abuse detected in the study
 
A piperazine derivative, TFMPP, was detected in one urine specimen (case 4, Table 2), together with cocaine and ketamine. Nonetheless TFMPP was not detected in the parallel hair sample.
 
Methcathinone, also known as ephedrone, is a cathinone (beta-keto amphetamine) analogue. It was detected in combination with amphetamine, methamphetamine, and cocaine metabolite in one urine specimen (case 5, Table 2). No parallel hair specimen was available from this subject.
 
Conventional drugs of abuse
Analysis of the 964 urine samples revealed the presence of 19 types of conventional DOA (Fig 2a). Codeine was the most common, being detected in 47% of the urine samples, followed by methadone (35%), heroin (22%), methamphetamine (21%), ketamine (20%), zopiclone (20%), amphetamine (17%), midazolam (17%), and dextromethorphan (14%). Cocaine and cannabis were detected in 6% and 3% of urine samples, respectively.
 

Figure 2. Conventional drugs of abuse detected in (a) urine and (b) hair samples as a percentage of the total number of samples collected (964 urine and 1036 hair samples)
 
In hair specimens (1036 in total), 14 types of conventional DOA were detected (Fig 2b). Codeine (36%) and methadone (35%) were the most prevalent, followed by ketamine (34%), heroin (33%), methamphetamine (29%), dextromethorphan (28%), and zopiclone (26%). Cocaine and zolpidem were detected in 12% and 7% of the samples, respectively.
 
Ethnic minority
A subset of participants (n=130) were of South Asian ethnicity. These subjects donated 248 specimens in 130 episodes. Their drug use pattern was significantly different from that of the Chinese participants. Comparison of urinalysis results revealed that South Asians had a significantly higher proportion of opiate use, including heroin, methadone, and codeine (P<0.001), as well as dextromethorphan use (P<0.05; Fig 3a). Conversely, ketamine, zopiclone, and diazepam (P<0.001) as well as cocaine and amphetamine (P<0.05) were detected at significantly higher rates in Chinese than in South Asian subjects. Analysis of hair specimens showed a largely similar pattern of discrepancy between the two ethnicities (Fig 3b).
 

Figure 3. Comparison of the drugs detected in (a) urine and (b) hair samples collected from Chinese (white bars) and South Asians (dark bars)
 
Collection site setting
The urine samples in the current study were collected from different settings: 38 samples from an acute setting (A&E departments and the HKPIC toxicology clinic); 885 samples from a drug rehabilitation setting (substance abuse clinics, CCPSA, and other rehabilitation centres); and 41 from a high-risk population (youth hangout centre). A comparison of drugs detected between the acute and rehabilitation settings revealed a significantly higher detection rate of ketamine and cocaine (P<0.001) in the former (Fig 4). Drugs such as codeine, methadone, heroin, zopiclone, and dextromethorphan were detected at higher rates in samples collected in the rehabilitation setting.
 

Figure 4. Comparison of the drugs detected in urine samples collected under the rehabilitation (white bars) and acute (dark bars) settings
 
Discussion
Emerging DOA are constantly being monitored worldwide by agencies such as the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA). In 2008, 13 emerging DOA were reported for the first time to EMCDDA; by 2012, 73 new drugs had been reported within a year.1 Recent years have also seen the emergence of such designer drugs in Hong Kong, some of which have caused severe morbidity and fatalities.4 5 7 The early identification of emerging drugs enables prompt counteractive measures in terms of their clinical and social management, and the surveillance of emerging drugs in the population is increasingly being adopted globally as a proactive approach to combat drug abuse.13 14 15 In view of this, the present study was conducted to validate a locally developed LC-MS/MS method to screen for emerging DOA in the local population whilst simultaneously monitoring routine DOA. The study was conducted over a 21-month period. Multiple clinical and social service units from across the city collaborated in the study for a wider geographical coverage and more representative results. In 2013, approximately 10 069 drug abusers were reported in Hong Kong.16 This study population (972 subjects) was estimated to represent 9.7% of the total potential subjects. Regarding the response or participation rate, due to practical concerns and limited manpower, it was not possible for every collaborating unit to document fully the number of subjects approached or the number who refused consent.
 
The current results revealed the presence of three emerging drugs (PMMA, TFMPP, and methcathinone) in five specimens. This low prevalence is an expected finding given the intrinsic nature of ‘emerging’ rather than ‘established’ drugs. Nevertheless, PMMA is a highly toxic methamphetamine derivative that has been sold on the drug market as an MDMA (3,4-methylenedioxymethamphetamine) substitute.8 The drug has been reported to have caused up to 90 fatalities worldwide over the years, including eight in Taiwan.8 17 In particular, PMMA-associated fatalities have also been reported recently in Hong Kong.6
 
On the other hand, TFMPP is a piperazine derivative with mild hallucinogenic effects and, when taken with another piperazine derivative benzylpiperazine (BZP), causes ecstasy-like effects.18 Piperazine derivatives are known to cause dissociative and sympathomimetic toxicity.19 The drug TFMPP was first reported in Hong Kong in 20104 and has been identified as an emerging drug in Ireland in recent years.15
 
Another emerging DOA detected in the study, methcathinone, gained popularity from the 1970s to 1990s, and was recently reported as a ‘re-emerging’ DOA in Sweden.14 It is an amphetamine-like stimulant and is among a group of synthetic cathinone compounds, commonly known as “bath salts”, that have been associated with numerous fatalities worldwide.20 Other highly toxic cathinone derivatives include mephedrone and MDPV (methylenedioxypyrovalerone),9 10 both of which are also covered in the analytical method but were not detected in the current study.
 
Of the conventional DOA, the opiates, methamphetamine, and ketamine were among the most frequently detected in this study. This is consistent with the data on reported drug abusers that was published by the local Central Registry of Drug Abuse.21 Since this manuscript focuses on screening for emerging DOA, detailed analysis of conventional drug use such as gender and age differences was not performed. However, an interesting finding was the observation that significantly higher proportions of South Asian drug abusers used opiates such as heroin, methadone, and codeine compared with Chinese; Chinese drug abusers were much more likely to use ketamine, cocaine, zopiclone, and diazepam. This highlights the ethnic differences in drug use and indicates that alternative approaches may be required for the clinical and social management of ethnic minorities in Hong Kong.
 
It is of interest to note the particularly high percentage of ketamine and cocaine detected in urine samples collected at A&E departments and toxicology clinic compared with the other collection sites. This may indicate that these drugs carry a more acute and severe toxicity profile relative to the other drugs with a consequent need of hospitalisation. A previous study on drug driving in Hong Kong also reported ketamine as the most prevalent drug detected in driver casualties who presented to the A&E department.22 Comparison of hair analysis results was not made here, since the main focus was on the difference between acute and non-acute cases; hair specimens would be less helpful since this biological matrix does not reflect recent exposure to drugs (see below for further discussion).
 
The present study showed a broadly similar pattern in urine and hair matrices in terms of the conventional DOA detected. Cocaine, dextromethorphan, and zolpidem were detected at higher rates in hair than in urine, which may indicate the relatively high deposition efficiency of these drugs in the hair matrix. It should be noted, however, that the metabolites of zolpidem were not included in the current assay, which may have decreased the sensitivity of its detection in urine. Urine and hair specimens have different ‘detection windows’, that is, they reflect different time frames of drug intake. Detection in urine indicates recent intake (within hours to days); thus this matrix is useful for the management of acute toxicity and drug overdose. The detection window of hair is much longer (weeks to months), enabling this matrix to be used for monitoring long-term drug use or abstinence.
 
When interpreting the results of the current study, it should be noted that some drugs may have been taken for therapeutic reasons, for example, codeine, methadone, phentermine, or the tranquillisers/benzodiazepines. It was not possible in this study to differentiate medical use from abuse. It should also be noted that some drugs may be present as metabolites of others, for example temazepam and oxazepam (both diazepam metabolites) and the emerging drug mCPP (a metabolite of the antidepressant trazodone). Morphine is likewise a metabolite of both codeine and heroin; it was reported here as a drug of abuse only in the absence of either codeine or heroin in the same sample.
 
Effective control of novel drugs depends on their early identification. A number of means to monitor emerging DOA have been proposed, such as conducting population surveys, analysing online test purchases, or wastewater analysis.3 Population surveys suffer the potential drawback of obtaining inaccurate data, since the actual identity of the drugs may differ from the claimed ingredients, for example, BZP being sold as ‘MDMA’ tablets.23 Analysing drug items purchased online is a costly approach due to the vast number of products available. Wastewater analysis may be used for monitoring conventional DOA, but the approach may not be easily adapted to the surveillance of emerging drugs due to the anticipated minute levels (ng/L range) in wastewater.24 All the above approaches require a considerable amount of financial and manpower resources. We propose the integration of emerging DOA surveillance into the routine drug monitoring of patients using the established analytical method. This surveillance approach is accurate, readily attainable, and is also achieved with minimal extra cost and effort since it is a convenient by-product of the routine drug monitoring of patients. Additionally, its applicability in A&E department patients allows the early identification of highly toxic novel drugs.
 
The proposed analytical method is LC-MS/MS–based, and offers several advantages over traditional DOA testing by immunoassay methods. First, development of an immunoassay is a lengthy process (in terms of years) involving the generation of antibodies. Immunoassay analysis also depends solely on the availability of commercial kits. These features do not favour early detection of new compounds given the protean nature of emerging drugs. In contrast, LC-MS/MS–based methods are much more versatile, permitting in-house enhancement of the method to allow detection of new compounds as soon as they enter the market. Second, although immunoassay methods require minimal capital investment, their running costs are high due to the generation of antibodies. On the other hand, LC-MS/MS methods require a high initial investment in analysers, but the running cost is lower in the long term as the reagents involved are relatively inexpensive. Lastly, unlike immunoassay methods that are only preliminary in nature and require further confirmatory testing, mass spectrometry analysis is already confirmatory with accurate and definitive results.
 
In addition to laboratory analysis, the emerging DOA surveillance team requires the expertise of medical doctors to keep a close watch on emerging drugs on the market, especially those with high clinical toxicity. Based on this ‘toxico-intelligence’, scientists should then enhance the analytical method to include such emerging substances. Hence, the effective control of emerging drugs will require a team of trained medical doctors and scientists, as well as versatile technology that enables the continual expansion of analytical coverage. In view of the resource requirements, specialised toxicology centres may be better suited for the purpose.
 
The present study has proven the concept that a locally developed analytical method is a valid tool to monitor emerging DOA whilst simultaneously performing regular DOA testing in patients. Implementation of the method in the routine drug monitoring of abusers will enable the continued and proactive surveillance of novel drugs in the population with minimal extra cost and effort. This surveillance gathers important information so that society can be prepared in terms of legislation, as well as social and clinical management of these potentially dangerous drugs. Further expansion of the analytical coverage will help keep abreast of the rapid and constant change in the designer drug scene.
 
Acknowledgements
This study was financially supported by the Beat Drugs Fund (Narcotics Division, Security Bureau of the HKSAR Government), project reference BDF101021. The authors are also grateful to all participants and for the generous assistance received from participating clinical divisions within the Hospital Authority and social service units.
 
Declaration
No conflicts of interest were declared by the authors.
 
Appendix
Members of the Emerging Drugs of Abuse Surveillance Study Group:
YH Lam, MPhil1; WH Cheung, FHKCPsych, FHKAM (Psychiatry)2; Eva Dunn, FHKCPsych, FHKAM (Psychiatry)3; CK Wong, FHKCPsych, FHKAM (Psychiatry)3; YC Lo, MSc, FIBMS4; M Lam, FHKCPsych, FHKAM (Psychiatry)5; Michael Lee, MSc6; Angus Lau, MSW7; Albert KK Chung, FHKCPsych, FHKAM (Psychiatry)8; Sidney Tam, FHKCPath, FHKAM (Pathology)9; Ted Tam, BSW10; Vincent Lam, BA(Hon)11; Hezon Tang, MSW12; Katy Wan, BSocSc, MA13; Mamre Lilian Yeh, BA, MSc14; MT Wong, FHKCPsych, FHKAM (Psychiatry)15; CC Shek, FHKCPath, FHKAM (Pathology)16; WK Tang, MD, FHKAM (Psychiatry)17; Michael Chan, FRCPA, FHKAM (Pathology)18; Jeffrey Fung, FRCSEd, FHKAM (Emergency Medicine)19; SH Tsui, FRCP (Edin), FHKAM (Emergency Medicine)20; Albert Lit, FCEM, FHKAM (Emergency Medicine)21; Joe Leung, FHKCEM, FHKAM (Emergency Medicine)22
 
1 Toxicology Reference Laboratory, Hospital Authority, Hong Kong
2 Substance Abuse Assessment Unit, Kwai Chung Hospital, Hong Kong
3 Department of Psychiatry, Pamela Youde Nethersole Eastern Hospital, Hong Kong
4 Department of Pathology, Pamela Youde Nethersole Eastern Hospital, Hong Kong
5 Department of General Adult Psychiatry, Castle Peak Hospital, Hong Kong
6 Department of Clinical Pathology, Tuen Mun Hospital, Hong Kong
7 The Society of Rehabilitation and Crime Prevention, Hong Kong
8 Department of Psychiatry, Queen Mary Hospital, Hong Kong
9 Department of Pathology and Clinical Biochemistry, Queen Mary Hospital, Hong Kong
10 Youth Outreach, Hong Kong
11 Evergreen Lutheran Centre, Hong Kong Lutheran Social Service, the Lutheran Church — Hong Kong Synod
12 Cheer Lutheran Centre, Hong Kong Lutheran Social Service, the Lutheran Church — Hong Kong Synod
13 Rainbow Lutheran Centre, Hong Kong Lutheran Social Service, the Lutheran Church — Hong Kong Synod
14 Operation Dawn Ltd (Gospel Drug Rehab Centre), Hong Kong
15 Department of Psychiatry, Kowloon Hospital, Hong Kong
16 Department of Pathology, Queen Elizabeth Hospital, Hong Kong
17 Department of Psychiatry, the Chinese University of Hong Kong, Hong Kong
18 Department of Chemical Pathology, Prince of Wales Hospital, Hong Kong
19 Accident and Emergency Department, Tuen Mun Hospital, Hong Kong
20 Accident and Emergency Department, Queen Mary Hospital, Hong Kong
21 Accident and Emergency Department, Princess Margaret Hospital, Hong Kong
22 Accident and Emergency Department, Pamela Youde Nethersole Eastern Hospital, Hong Kong
 
References
1. European Monitoring Centre for Drugs and Drug Addiction (EMCDDA), Europol. New drugs in Europe, 2012. EMCDDA-Europol 2012 Annual Report on the implementation of Council Decision 2005/387/JHA; 2012.
2. Nelson ME, Bryant SM, Aks SE. Emerging drugs of abuse. Emerg Med Clin North Am 2014;32:1-28. Crossref
3. Brandt SD, King LA, Evans-Brown M. The new drug phenomenon. Drug Test Anal 2014;6:587-97. Crossref
4. Poon WT, Lai CF, Lui MC, Chan AY, Mak TW. Piperazines: a new class of drug of abuse has landed in Hong Kong. Hong Kong Med J 2010;16:76-7.
5. Tung CK, Chiang TP, Lam M. Acute mental disturbance caused by synthetic cannabinoid: a potential emerging substance of abuse in Hong Kong. East Asian Arch Psychiatry 2012;22:31-3.
6. The first mortality case of PMMA in Hong Kong. 本港首次發現服用毒品PMMA後死亡個案. RTHK. 2014 Feb 4. http://m.rthk.hk/news/20140204/982353.htm.
7. Tang MH, Ching CK, Tsui MS, Chu FK, Mak TW. Two cases of severe intoxication associated with analytically confirmed use of the novel psychoactive substances 25B-NBOMe and 25C-NBOMe. Clin Toxicol (Phila) 2014;52:561-5. Crossref
8. Lin DL, Liu HC, Yin HL. Recent paramethoxymethamphetamine (PMMA) deaths in Taiwan. J Anal Toxicol 2007;31:109-13. Crossref
9. Maskell PD, De Paoli G, Seneviratne C, Pounder DJ. Mephedrone (4-methylmethcathinone)-related deaths. J Anal Toxicol 2011;35:188-91. Crossref
10. Durham M. Ivory wave: the next mephedrone? Emerg Med J 2011;28:1059-60. Crossref
11. Tang MH, Ching CK, Lee CY, Lam YH, Mak TW. Simultaneous detection of 93 conventional and emerging drugs of abuse and their metabolites in urine by UHPLC-MS/MS. J Chromatogr B Analyt Technol Biomed Life Sci 2014;969:272-84. Crossref
12. The fitness for purpose of analytical methods: A laboratory guide to method validation and related topics. EURACHEM Working Group; 1998.
13. Archer JR, Dargan PI, Hudson S, Wood DM. Analysis of anonymous pooled urine from portable urinals in central London confirms the significant use of novel psychoactive substances. QJM 2013;106:147-52. Crossref
14. Helander A, Beck O, Hägerkvist R, Hultén P. Identification of novel psychoactive drug use in Sweden based on laboratory analysis—initial experiences from the STRIDA project. Scand J Clin Lab Invest 2013;73:400-6. Crossref
15. O’Byrne PM, Kavanagh PV, McNamara SM, Stokes SM. Screening of stimulants including designer drugs in urine using a liquid chromatography tandem mass spectrometry system. J Anal Toxicol 2013;37:64-73. Crossref
16. Newly/previously reported drug abusers by sex. Central Registry of Drug Abuse. Available from: http://www.nd.gov.hk/statistics_list/doc/en/t11.pdf. Accessed 3 Feb 2015.
17. European Monitoring Centre for Drugs and Drug Addiction (EMCDDA). EMCDDA risk assessments: Report on the risk assessment of PMMA in the framework of the joint action on new synthetic drugs; 2003.
18. Arbo MD, Bastos ML, Carmo HF. Piperazine compounds as drugs of abuse. Drug Alcohol Depend 2012;122:174-85. Crossref
19. Wood DM, Button J, Lidder S, Ramsey J, Holt DW, Dargan PI. Dissociative and sympathomimetic toxicity associated with recreational use of 1-(3-trifluoromethylphenyl) piperazine (TFMPP) and 1-benzylpiperzine (BZP). J Med Toxicol 2008;4:254-7. Crossref
20. Zawilska JB, Wojcieszak J. Designer cathinones—an emerging class of novel recreational drugs. Forensic Sci Int 2013;231:42-53. Crossref
21. Reported drug abusers by sex by common type of drugs abused. Central Registry of Drug Abuse. Available from: http://www.nd.gov.hk/statistics_list/doc/en/t15.pdf. Accessed 11 Jul 2014.
22. Wong OF, Tsui KL, Lam TS, et al. Prevalence of drugged drivers among non-fatal driver casualties presenting to a trauma centre in Hong Kong. Hong Kong Med J 2010;16:246-51.
23. Wood DM, Dargan PI, Button J, et al. Collapse, reported seizure—and an unexpected pill. Lancet 2007;369:1490. Crossref
24. van Nuijs AL, Gheorghe A, Jorens PG, Maudens K, Neels H, Covaci A. Optimization, validation, and the application of liquid chromatography-tandem mass spectrometry for the analysis of new drugs of abuse in wastewater. Drug Test Anal 2014;6:861-7. Crossref

Duplex sonography for detection of deep vein thrombosis of upper extremities: a 13-year experience

Hong Kong Med J 2015 Apr;21(2):107–13 | Epub 27 Feb 2015
DOI: 10.12809/hkmj144389
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Duplex sonography for detection of deep vein thrombosis of upper extremities: a 13-year experience
Amy SY Chung, MSc, MHKCRRT1; WH Luk, FRCR, FHKAM (Radiology)2; Adrian XN Lo, FRCR, FHKAM (Radiology)3; CF Lo, PDDR1
1Department of Radiology, United Christian Hospital, Kwun Tong, Hong Kong
2Department of Radiology, Princess Margaret Hospital, Laichikok, Hong Kong
3Department of Radiology, Hong Kong Adventist Hospital, 40 Stubbs Road, Hong Kong
Corresponding author: Dr Amy SY Chung (chungsya@gmail.com)
Abstract
Objectives: To determine the prevalence and characteristics of sonographically evident upper-extremity deep vein thrombosis in symptomatic Chinese patients and identify its associated risk factors.
 
Design: Case series.
 
Setting: Regional hospital, Hong Kong.
 
Patients: Data on patients undergoing upper-extremity venous sonography examinations during a 13-year period from November 1999 to October 2012 were retrieved. Variables including age, sex, history of smoking, history of lower-extremity deep vein thrombosis, major surgery within 30 days, immobilisation within 30 days, cancer (history of malignancy), associated central venous or indwelling catheter, hypertension, diabetes mellitus, sepsis within 30 days, and stroke within 30 days were tested using binary logistic regression to understand the risk factors for upper-extremity deep vein thrombosis.
 
Main outcome measures: The presence of upper-extremity deep vein thrombosis.
 
Results: Overall, 213 patients with upper-extremity sonography were identified. Of these patients, 29 (13.6%) had upper-extremity deep vein thrombosis. The proportion of upper-extremity deep vein thrombosis using initial ultrasound was 0.26% of all deep vein thrombosis ultrasound requests. Upper limb swelling was the most common presentation seen in a total of 206 (96.7%) patients. Smoking (37.9%), history of cancer (65.5%), and hypertension (27.6%) were the more prevalent conditions among patients in the upper-extremity deep vein thrombosis–positive group. No statistically significant predictor of upper-extremity deep vein thrombosis was noted if all variables were included. After backward stepwise logistic regression, the final model was left with only age (P=0.119), female gender (P=0.114), and history of malignancy (P=0.024) as independent variables. History of malignancy remained predictive of upper-extremity deep vein thrombosis.
 
Conclusions: Upper-extremity deep vein thrombosis is uncommon in the symptomatic Chinese population. The most common sign is swelling, and the major risk factor for upper-extremity deep vein thrombosis identified in this study is malignancy.
 
 
New knowledge added by this study
  •  Data suggest that upper-extremity deep vein thrombosis among ethnic Chinese differs from that in western populations.
Implications for clinical practice or policy
  •  Patients with a history of malignancy should be given priority for ultrasound screening of upper-extremity deep vein thrombosis.
 
 
Introduction
Requests for upper-extremity vein sonography to screen for deep vein thrombosis (DVT) have long been considered rare at United Christian Hospital in Hong Kong. This may have been because upper-extremity deep vein thrombosis (UEDVT) was considered a benign phenomenon rather than an urgent condition. However, UEDVT carries a risk of pulmonary embolism (PE) and can lead to morbidity and mortality. Therefore, understanding the associated risk factors would help improve the ability to predict and prevent PE.
 
In the past decade, most research has focused on the identification and management of lower-extremity deep vein thrombosis (LEDVT), because UEDVT was believed to be clinically insignificant and quite rare, representing less than 2% of DVT.1 A study by Baarslag et al2 in 2004, however, reported that around half of their patients with UEDVT died during the follow-up period. More recent studies have challenged this belief.3 4 5 In 2004, Chan et al6 compared Chinese and Caucasian patients and showed that the prevalence of LEDVT differed between the two populations (9.1% proximal LEDVT without prophylaxis in Chinese vs 16% proximal LEDVT with prophylaxis in Caucasians). This suggested that a study to assess the prevalence of UEDVT in the Chinese population was needed.
 
There are many imaging strategies to aid diagnosis of UEDVT. Contrast venography and computed tomography (CT) venography require the injection of contrast agents and involve radiation. Magnetic resonance venography, by contrast, involves no radiation and can be performed without contrast injection; its use, however, is limited by high cost and the inconvenience associated with the procedure. Colour duplex sonography, on the other hand, is relatively cheap and more readily available. It provides excellent sensitivity and specificity, as shown in a study by Köksoy et al7 in which the sensitivity and specificity were 94% and 96%, respectively. According to these authors, the downside is that this technique cannot completely exclude the presence of thrombus in the axillary, subclavian, superior vena cava, or brachiocephalic vessels.7 The presence of UEDVT may only be inferred from secondary signs such as absence of respiratory variation and cardiac pulsatility.8 In view of its safety and cost-effectiveness, duplex sonography is usually preferred as the first-line imaging technique in the evaluation of UEDVT.
 
The aims of this study were to determine the prevalence and characteristics of sonographically evident UEDVT in symptomatic Chinese patients and identify the associated risk factors.
 
Methods
Methodology
A retrospective study was conducted in a regional hospital in a district where the socio-economic status was similar to that of the rest of the population in Hong Kong.9 The study sample comprised patients undergoing an initial duplex sonography of the upper extremity for suspected UEDVT between November 1999 and October 2012. An initial search of the computerised Radiology Information System of the Hong Kong Hospital Authority identified patients who had undergone duplex sonography of upper- or lower-extremity veins. From the radiology reports, positive cases of DVT (both UEDVT and LEDVT) were sourced using the key words “incomplete compressibility”, “non-compressible”, “incompressible”, “not compressible”, or “compressibility: (no)”. The search was further narrowed to retrieve the radiology reports and images of all upper-extremity vein sonography using key words such as “upper extremity vein” or “upper limb vein”.
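As an illustration, the two-stage keyword screen described above can be sketched as a simple case-insensitive substring filter. The report records and field names below are hypothetical, not the actual Radiology Information System schema.

```python
# Keyword lists taken from the search strategy described in the Methods.
DVT_KEYWORDS = [
    "incomplete compressibility",
    "non-compressible",
    "incompressible",
    "not compressible",
    "compressibility: (no)",
]
UPPER_LIMB_KEYWORDS = ["upper extremity vein", "upper limb vein"]


def matches_any(text: str, keywords: list[str]) -> bool:
    """Case-insensitive substring match against a keyword list."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in keywords)


def screen_reports(reports: list[dict]) -> list[dict]:
    """Keep upper-extremity studies whose report text suggests DVT.

    Each report is a dict with a hypothetical "text" field holding the
    free-text radiology report.
    """
    return [
        report for report in reports
        if matches_any(report["text"], UPPER_LIMB_KEYWORDS)
        and matches_any(report["text"], DVT_KEYWORDS)
    ]
```

In practice such a screen only narrows the candidate pool; as in the study, the flagged reports would still be reviewed by radiologists before a case is counted as UEDVT-positive.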
 
Since the demographic profile of Hong Kong is mainly ethnic Chinese, our study included only Chinese patients who underwent initial upper-extremity sonography for the detection of UEDVT within the defined period. Studies that were incomplete for any reason and patients who had a positive finding of UEDVT from a previous scan were excluded. Medical record search was performed for the selected patients through the electronic Patient Record System.
 
Data collection and analysis
The medical records were reviewed and data on patient demographic characteristics, possible risk factors, and co-morbidities were collected. All confidential patient data were de-identified and each patient was assigned a study number before analysis. Standardised data collection charts were used to gather information, and details of information recorded are shown in Table 1.
 

Table 1. Summary of data recorded in data collection chart
 
The radiology reports and images were reviewed by two qualified radiologists, each with more than 10 years of experience. The diagnosis of UEDVT was primarily based on incomplete compressibility of the veins on sonography.3 When Doppler evaluation was used, absence of flow, lack of respiratory variation, or lack of cardiac pulsatility served as secondary diagnostic criteria.3 Central lines were considered present if mentioned in the sonography report or the medical record, or documented on chest radiography, venography, CT, or another imaging modality within 4 weeks prior to sonography. Catheter size and material were not considered or correlated, as such information was not readily available retrospectively. Cases with a history of vigorous exercise within 4 weeks of UEDVT were classified as effort-related.10 Conversely, when no forceful activity of the limb or other predisposing factor was observed before the onset of symptoms, UEDVT was classified as idiopathic or spontaneous.9 Any discrepancies in the report or findings were resolved by consensus between the two reviewing radiologists.
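The diagnostic and aetiological rules above can be summarised as a pair of small decision functions. This is a hypothetical sketch for clarity only; the parameter names are illustrative and do not come from the study's data dictionary.

```python
def uedvt_positive(incompressible: bool,
                   absent_flow: bool = False,
                   no_respiratory_variation: bool = False,
                   no_cardiac_pulsatility: bool = False) -> bool:
    """Primary criterion: incomplete compressibility on sonography.

    The three Doppler findings are the secondary criteria, applicable
    only when Doppler evaluation was performed.
    """
    return (incompressible or absent_flow
            or no_respiratory_variation or no_cardiac_pulsatility)


def classify_aetiology(vigorous_exercise_within_4_weeks: bool) -> str:
    """Effort-related if vigorous exercise preceded UEDVT by <=4 weeks;
    otherwise, absent any other predisposing factor, idiopathic."""
    if vigorous_exercise_within_4_weeks:
        return "effort-related"
    return "idiopathic"
```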
 
Preliminary data analysis was performed using descriptive statistics. The mean age and the frequency distribution by gender were calculated for the UEDVT-negative and UEDVT-positive groups. The t test was used to examine the difference in age between the two groups, with P<0.05 regarded as significant. The frequency distributions of signs and symptoms, including swelling, extremity discomfort, erythema, dyspnoea, chest pain, and cough, were compared between the two groups, and the frequency proportions of the variables in each group were calculated. Variables including age, sex, history of smoking and of LEDVT, major surgery within 30 days, immobilisation within 30 days, cancer (history of malignancy), associated CVC (central venous or indwelling catheter), hypertension, diabetes mellitus, sepsis within 30 days, and stroke within 30 days were tested using binary logistic regression. In backward stepwise logistic regression, the variables with the highest P values were eliminated one by one until all remaining variables had P≤0.2; P<0.05 was considered significant. The most prevalent risk factor in the UEDVT-positive group was identified and compared with data from Caucasian populations. All statistical comparisons were made using the Statistical Package for the Social Sciences (Windows version 19.0; SPSS Inc, Chicago [IL], US).
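The backward elimination loop described above can be sketched as follows. Here `fit_pvalues` stands in for refitting the logistic regression on the currently retained variables (e.g. with a statistics package) and returning each variable's P value; it is an assumed interface, not part of the study.

```python
def backward_stepwise(variables, fit_pvalues, threshold=0.2):
    """Backward stepwise elimination: repeatedly drop the variable with
    the highest P value until every remaining variable has P <= threshold.

    fit_pvalues(retained) -> dict mapping each retained variable name to
    its P value in a model fitted on exactly those variables.
    """
    retained = list(variables)
    while retained:
        pvals = fit_pvalues(retained)
        worst = max(retained, key=lambda v: pvals[v])
        if pvals[worst] <= threshold:
            break  # all remaining variables meet the retention criterion
        retained.remove(worst)
    return retained
```

In a real analysis the P values change at every refit; a toy `fit_pvalues` that returns fixed values (e.g. the P values reported in Table 5 plus some large ones) reproduces the study's outcome of retaining only age, sex, and history of malignancy.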
 
Results
Between November 1999 and October 2012, 11 019 patients underwent upper- or lower-extremity vein ultrasound examinations in the hospital. The majority of requests (10 783 patients, 97.9%) were for lower-extremity vein ultrasound. Ultrasound diagnosis of DVT (UEDVT and LEDVT) was made in 822 (7.6%) patients during that period, of whom 34 (4.1%) had UEDVT and 788 (95.9%) had LEDVT.
 
Overall there were 236 upper-extremity vein ultrasound requests. Of these, 23 patients (five of whom had UEDVT) were excluded because the examination was not an initial upper-extremity vein sonography. A total of 213 patients were included in the study sample; UEDVT was diagnosed in 29 (13.6%) of them (Fig). Therefore, the proportion of UEDVT diagnosed by initial ultrasound was only 0.26% (29/11 019) of all DVT (upper and lower extremity) ultrasound requests. The demographic characteristics of patients in the UEDVT-negative and UEDVT-positive groups are shown in Table 2.
 

Figure. Ultrasound images of (a) a patient diagnosed with breast carcinoma: it shows lack of colour signals inside the vein (thrombus formation); and (b) a patient with colon carcinoma in bed-bound palliative care: it shows large thrombus inside the vein lumen
 

Table 2. Age and sex distribution of patients
 
When the age distributions of the two groups were compared with the t test, the difference was not significant (P=0.06). In the UEDVT-negative group, 74 (40.2%) patients were male and 110 (59.8%) were female, with no significant difference in age distribution between the two genders (P=0.394). In the UEDVT-positive group, 15 (51.7%) patients were male and 14 (48.3%) were female; the t test comparing age distribution between the two genders in this group also showed no significant difference (P=0.257).
 
The frequency distributions of the signs and symptoms in the two groups are summarised in Table 3. Upper limb swelling was the most common presentation in the UEDVT-negative group, seen in 178 (96.7%) patients. It was likewise the most common sign in the UEDVT-positive group, present in 28 (96.6%) patients.
 

Table 3. Frequency distribution of signs and symptoms in both UEDVT-negative and -positive groups
 
Statistical analysis and the frequency proportions of variables in the two groups are summarised in Table 4. In the UEDVT-negative group, history of cancer, hypertension, and diabetes mellitus were the more prevalent variables, seen in 82 (44.6%), 81 (44.0%), and 47 (25.5%) patients, respectively. Among the 29 patients in the UEDVT-positive group, history of smoking, history of cancer, and hypertension were the prevalent risk factors, seen in 11 (37.9%), 19 (65.5%), and 8 (27.6%) patients, respectively.
 

Table 4. Statistical analysis and frequency proportion of variables in the UEDVT-negative and -positive groups
 
Binary logistic regression was used to test the variables (Table 4). No statistically significant predictors of UEDVT were found when all variables were included. There was a trend towards a higher risk of UEDVT in patients with a history of malignancy (odds ratio [OR]=2.250, P=0.071), but this was not statistically significant. Backward stepwise regression was performed to eliminate the independent variables with the highest P values until all remaining variables had P≤0.2. The final regression model retained only age, sex, and history of malignancy as independent variables, as the other variables persistently showed high P values (Table 5).
 

Table 5. Analysis of risk factors for UEDVT (remaining variables after backward stepwise regression)
 
In this study, the variables remaining in the model were age (P=0.119), female gender (P=0.114), and history of malignancy (P=0.024). History of malignancy remained predictive of UEDVT; a positive history of malignancy had an OR of 2.664 (95% confidence interval, 1.140-6.211) for the presence of UEDVT. In the UEDVT-positive group, no obvious predisposing cause was observed in three (10.3%) patients, who were therefore classified as having primary UEDVT; the remaining 26 (89.7%) patients were classified as having secondary UEDVT.
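As a worked check of figures of this kind, a logistic regression odds ratio and its 95% confidence interval follow from the fitted coefficient as OR = exp(β) with CI = exp(β ± 1.96 × SE). The β and SE below are back-calculated illustrations consistent with the reported OR, not values reported in the study.

```python
import math


def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a logistic regression
    coefficient (beta) and its standard error (se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))


# Illustrative back-calculation: beta ~ ln(2.664) ~ 0.98, se ~ 0.433
or_, lo, hi = odds_ratio_ci(0.98, 0.433)
```

Note that the CI is symmetric on the log-odds scale, which is why the reported interval (1.140-6.211) is asymmetric around the OR itself.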
 
Discussion
In our study, the number of UEDVT cases diagnosed by initial sonography over the 13-year period averaged about 2.2 patients per year. As stated earlier, UEDVT screening has long been regarded as a rare request in our hospital, and this is clearly evident from this study: requests for UEDVT sonography constituted only 2.1% (236/11 019) of all extremity (upper and lower) vein ultrasound requests, and the proportion of UEDVT diagnosed by initial ultrasound was only 0.26% of all DVT ultrasound requests, and therefore very rare.
 
Among the 29 patients with UEDVT in our study, three presented with no obvious predisposing cause. One healthy 32-year-old male reported developing symptoms after exercise, so this case was classified as primary effort-related thrombosis. Effort-related UEDVT typically affects young, healthy individuals, with a male-to-female ratio of approximately 2:1.11 Similarly, in this study the incidence was higher in males, and the affected males were younger than the females. Pain and swelling are commonly present in patients with UEDVT, as shown in a study by Mustafa et al.4 Likewise, swelling was the most prevalent sign in our study, seen in 96.6% of patients.
 
In our study, the prevalence of UEDVT among those undergoing ultrasound examinations for suspected UEDVT was 13.6%, the lowest compared with other studies conducted in Caucasian populations (18%,12 40%,13 25%,14 and 40%5). We also observed fewer patients with indwelling catheters in our study sample than in other studies (10.3% vs 11.6%,13 12%,12 23%,14 and 57%5). An earlier report by Joffe et al3 suggested that an indwelling catheter was the strongest predictor of UEDVT, and this may explain the lower prevalence in our study compared with others.
 
Overall, our study found that history of smoking (37.9%), malignancy (65.5%), and hypertension (27.6%) were the common risk factors, particularly in the UEDVT-positive group (Table 4). Statistical analysis showed that a history of malignancy remained predictive of UEDVT. Malignancy was thus a major risk factor for UEDVT in our study, similar to studies conducted in Caucasian populations.1 3 4 Indeed, the frequency of cancer in our study (65.5%) was even higher than that reported in Caucasian populations in other studies (43%,15 30%,16 38%,17 and 45%4).
 
Similar studies on Chinese populations have been published. Chen et al18 investigated the differences in limb, age, and sex of Chinese patients with LEDVT. Abdullah et al19 studied the incidence of UEDVT associated with peripherally inserted central catheters. Liu et al20 estimated the incidence of venous thromboembolism, rather than UEDVT specifically, in a Hong Kong regional hospital. However, no study comparing the prevalence of UEDVT between Chinese and western populations has been performed. This study highlighted malignancy as the major risk factor for UEDVT. In a resource-limited health care system, patients with a history of malignancy should be prioritised in the triage of symptomatic patients referred for UEDVT screening, because malignancy is a major predictor of UEDVT and carries a risk of PE. Such prioritisation will benefit patients with UEDVT, as they can be identified and treated early.
 
Limitations
This was a retrospective observational study, and data were collected only from what was available in the medical records. Therefore, the frequency of UEDVT reported might grossly underestimate the true number, because signs and symptoms of UEDVT are usually non-specific and, as reported in other prospective studies, many patients with UEDVT may remain completely asymptomatic.21
 
In our study, the diagnosis of UEDVT was made solely by ultrasound. Studies have shown that ultrasound imaging has excellent sensitivity and specificity for LEDVT,22 23 with one study reporting a sensitivity of 97% to 100% and a specificity of 98% to 99%.18 However, previous studies have reported lower sensitivity and specificity for upper-extremity ultrasound, at 78% to 100% and 82% to 100%, respectively.18 19 There are several possible reasons why the sensitivity and specificity for detecting UEDVT are lower than those for LEDVT. The main reason is anatomic: the sternum and clavicle create acoustic shadowing or artefact on ultrasound imaging, which limits visualisation of the proximal upper-extremity veins and thereby explains the relatively low sensitivity and specificity.3 It is additionally difficult to visualise the centrally situated veins, such as the medial segment of the subclavian vein, the brachiocephalic vein, and their confluence with the superior vena cava.24 Moreover, the presence of a catheter might not only alter venous tone but also affect venous flow, making the Doppler findings more difficult to interpret. Further, differentiation between a normal vein and a large collateral in a patient with chronic venous thrombosis might sometimes be difficult.20 Another limitation of our study was the relatively small sample size, especially of catheter-related cases; such small numbers preclude subgroup analysis and lower the statistical power for identifying risk factors.
 
Conclusions
The major risk factor for UEDVT identified from this study is malignancy. Therefore, patients with a history of malignancy should be prioritised in the triage of symptomatic patients referred for UEDVT screening because malignancy is a major predictor of UEDVT and carries risk for PE.
 
References
1. Tilney ML, Griffiths HJ, Edwards EA. Natural history of major venous thrombosis of the upper extremity. Arch Surg 1970;101:792-6. Crossref
2. Baarslag HJ, Koopman MM, Hutten BA, et al. Long-term follow-up of patients with suspected deep vein thrombosis of the upper extremity: survival, risk factors and post-thrombotic syndrome. Eur J Intern Med 2004;15:503-7. Crossref
3. Joffe HV, Kucher N, Tapson VF, Goldhaber SZ; Deep Vein Thrombosis (DVT) FREE Steering Committee. Upper-extremity deep vein thrombosis: a prospective registry of 592 patients. Circulation 2004;110:1605-11. Crossref
4. Mustafa S, Stein PD, Patel KC, Otten TR, Holmes R, Silbergleit A. Upper extremity deep venous thrombosis. Chest 2003;123:1953-6. Crossref
5. Giess CS, Thaler H, Bach AM, Hann LE. Clinical experience with upper extremity sonography in a high-risk cancer population. J Ultrasound Med 2002;21:1365-70.
6. Chan YK, Chiu KY, Cheng SW, Ho P. The incidence of deep vein thrombosis in elderly Chinese suffering hip fracture is low without prophylaxis: a prospective study using serial duplex ultrasound. J Orthop Surg (Hong Kong) 2004;12:178-83.
7. Köksoy C, Kuzu A, Kutlay J, Erden I, Ozcan H, Ergîn K. The diagnostic value of colour Doppler ultrasound in central venous catheter related thrombosis. Clin Radiol 1995;50:687-9. Crossref
8. Marshall PS, Cain H. Upper extremity deep vein thrombosis. Clin Chest Med 2010;31:783-97. Crossref
9. Statistical tables of the 2006 population by-census. Available from: http://www.bycensus2006.gov.hk/en/data/data3/statistical_tables/index.htm#A2. Accessed 9 Dec 2014.
10. Joffe HV, Goldhaber SZ. Upper-extremity deep vein thrombosis. Circulation 2002;106:1874-80. Crossref
11. Illig KA, Doyle AJ. A comprehensive review of Paget-Schroetter syndrome. J Vasc Surg 2010;51:1538-47. Crossref
12. Kerr TM, Lutter KS, Moeller DM, et al. Upper extremity venous thrombosis diagnosed by duplex scanning. Am J Surg 1990;160:202-6. Crossref
13. Kröger K, Schelo C, Gocke C, Rudofsky G. Colour Doppler sonographic diagnosis of upper limb venous thromboses. Clin Sci (Lond) 1998;94:657-61.
14. Lee JA, Zierler BK, Zierler RE. The risk factors and clinical outcomes of upper extremity deep vein thrombosis. Vasc Endovascular Surg 2012;46:139-44. Crossref
15. Marinella MA, Kathula SL, Markert RJ. Spectrum of upper-extremity deep venous thrombosis in a community teaching hospital. Heart Lung 2000;29:113-7. Crossref
16. Isma N, Svensson PJ, Gottsäter A, Lindblad B. Upper extremity deep venous thrombosis in the population-based Malmö thrombophilia study (MATS). Epidemiology, risk factors, recurrence risk, and mortality. Thromb Res 2010;125:335-8. Crossref
17. Muñoz FJ, Mismetti P, Poggio R, et al. Clinical outcome of patients with upper-extremity deep vein thrombosis: results from the RIETE Registry. Chest 2008;133:143-8. Crossref
18. Chen F, Xiong JX, Zhou WM. Differences in limb, age and sex of Chinese deep vein thrombosis patients. Phlebology 2014 Feb 14. Epub ahead of print. Crossref
19. Abdullah BJ, Mohammad N, Sangkar JV, et al. Incidence of upper limb venous thrombosis associated with peripherally inserted central catheters (PICC). Br J Radiol 2005;78:596-600. Crossref
20. Liu HS, Kho BC, Chan JC, et al. Venous thromboembolism in the Chinese population—experience in a regional hospital in Hong Kong. Hong Kong Med J 2002;8:400-5.
21. Luciani A, Clement O, Halimi P, et al. Catheter-related upper extremity deep venous thrombosis in cancer patients: a prospective study based on Doppler US. Radiology 2001;220:655-60. Crossref
22. Prandoni P, Polistena P, Bernardi E, et al. Upper-extremity deep vein thrombosis. Risk factors, diagnosis, and complications. Arch Intern Med 1997;157:57-62. Crossref
23. Baarslag HJ, van Beek EJ, Koopman MM, Reekers JA. Prospective study of color duplex ultrasonography compared with contrast venography in patients suspected of having deep venous thrombosis of the upper extremities. Ann Intern Med 2002;136:865-72. Crossref
24. Chin EE, Zimmerman PT, Grant EG. Sonographic evaluation of upper extremity deep venous thrombosis. J Ultrasound Med 2005;24:829-38; quiz 839-40.

Prospective study on the effects of orthotic treatment for medial knee osteoarthritis in Chinese patients: clinical outcome and gait analysis

Hong Kong Med J 2015 Apr;21(2):98–106 | Epub 10 Mar 2015
DOI: 10.12809/hkmj144311
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Prospective study on the effects of orthotic treatment for medial knee osteoarthritis in Chinese patients: clinical outcome and gait analysis
Henry CH Fu, MB, BS, MMedSc1; Chester WH Lie, FRCS (Edin), FHKAM (Orthopaedic Surgery)2; TP Ng, FRCS (Edin), FHKAM (Orthopaedic Surgery)3; KW Chen, BSc4; CY Tse, BSc4; WH Wong, Diploma in Prosthetics and Orthotics4
1 Department of Orthopaedics and Traumatology, Queen Mary Hospital, Pokfulam, Hong Kong
2 Department of Orthopaedics and Traumatology, Kwong Wah Hospital, Yaumatei, Hong Kong
3 Private Practice, Hong Kong
4 Department of Prosthetics and Orthotics, Queen Mary Hospital, Pokfulam, Hong Kong
Corresponding author: Dr Chester WH Lie (chesterliewh@gmail.com)
Abstract
Objective: To evaluate the effectiveness of various orthotic treatments for patients with isolated medial compartment osteoarthritis.
 
Design: Prospective cohort study with sequential interventions.
 
Setting: University-affiliated hospital, Hong Kong.
 
Patients: From December 2010 to November 2011, 10 patients with medial knee osteoarthritis were referred by orthopaedic surgeons for orthotic treatment. All patients were sequentially treated with flat insole, lateral-wedged insole, lateral-wedged insole with subtalar strap, lateral-wedged insole with arch support, valgus knee brace, and valgus knee brace with lateral-wedged insole with arch support for 4 weeks with no treatment break. Three-dimensional gait analysis and questionnaires were completed after each orthotic treatment.
 
Main outcome measures: The Western Ontario and McMaster Universities Arthritis Index (WOMAC), visual analogue scale scores, and peak and mean knee adduction moments.
 
Results: Compared with pretreatment, the lateral-wedged insole, lateral-wedged insole with arch support, and valgus knee brace groups demonstrated significant reductions in WOMAC pain score (19.1%, P=0.04; 18.2%, P=0.04; and 20.4%, P=0.02, respectively). The lateral-wedged insole with arch support group showed the greatest reduction in visual analogue scale score compared with pretreatment at 24.1% (P=0.004). Addition of a subtalar strap to lateral-wedged insoles (lateral-wedged insole with subtalar strap) did not produce significant benefit when compared with the lateral-wedged insole alone. The valgus knee brace with lateral-wedged insole with arch support group demonstrated an additive effect with a statistically significant reduction in WOMAC total score (-26.7%, P=0.01). Compliance with treatment for the isolated insole groups were all over 90%, but compliance for the valgus knee brace–associated groups was only around 50%. Gait analysis indicated statistically significant reductions in peak and mean knee adduction moments in all orthotic groups when compared with a flat insole.
 
Conclusions: These results support the use of orthotic treatment for early medial compartment knee osteoarthritis.
 
 
New knowledge added by this study
  •  Our data support the use of the lateral-wedged insole with arch support and valgus knee brace in the management of medial compartment osteoarthritis of the knee; however, compliance with the valgus knee brace is fair. Gait analysis showed that both supports can reduce the knee adduction moment during walking.
Implications for clinical practice or policy
  •  Lateral-wedged insoles with arch support and valgus knee brace can be considered for patients with medial compartment osteoarthritis of the knee.
 
 
Introduction
Osteoarthritis of the knee is the commonest type of arthritis affecting the geriatric population. Conservative treatment with physiotherapy and analgesics provides temporary relief of symptoms, yet surgical intervention such as high tibial osteotomy, unicompartmental knee replacement, or total knee replacement is a major undertaking and not without risk.1 2 The medial compartment is more commonly affected than the lateral compartment in osteoarthritis (67% and 17%, respectively).3 Varus alignment of the lower limbs increases the risk of incident knee osteoarthritis and also increases the risk of disease progression in patients with osteoarthritis.4 Apart from static lower limb alignment, dynamic varus thrust during the gait cycle is also independently associated with osteoarthritis progression in the knee.5 Knee adduction moment (KAM) is an indirect means to assess varus thrust during the gait cycle. Previous studies have proven the validity of KAM for prediction of clinical and radiological osteoarthritis progression.6
 
Orthotic treatment can alter loading to the knee in the hope of reducing symptoms and disease progression. Biomechanical studies have demonstrated a small effect size in reduction of KAM with a valgus knee brace7 8 9 10 and lateral-wedged insoles.11 12 13 14 This study is the first to sequentially evaluate the clinical outcomes and gait analyses of different orthotic treatments in Chinese patients with medial compartment osteoarthritis.
 
Methods
Patients
From December 2010 to November 2011, 18 patients with isolated medial osteoarthritis of the knee were referred by orthopaedic surgeons to the Department of Prosthetics and Orthotics at Queen Mary Hospital for orthotic treatment.
 
The inclusion criteria were age older than 50 years and a diagnosis of osteoarthritis according to the American College of Rheumatology criteria.15 The predominant symptom needed to be medial knee pain. Radiographical features needed to include varus knee alignment and osteoarthritis of Kellgren-Lawrence grade 2 or above over the medial compartment.16
 
Our study population comprised patients with isolated medial compartment osteoarthritis, while patients with predominant lateral compartment or patellofemoral joint symptoms or those with radiographical features of osteoarthritis of Kellgren-Lawrence grade 2 or above over the lateral compartment or patellofemoral joint were excluded.
 
Patients with previous knee surgery, a fixed flexion deformity of >10°, or hip or ankle pathology, and those who required a walking aid or had morbid obesity (body mass index >40 kg/m2), a dermatological condition, or peripheral vascular disease were also excluded.
 
This was a non-randomised prospective cohort study with a cross-over design. All 10 patients were sequentially treated with a flat insole (FI), lateral-wedged insole (LW), lateral-wedged insole with subtalar strap (LW+SS), lateral-wedged insole with arch support (LWAS), valgus knee brace (VKB), and valgus knee brace with lateral-wedged insole with arch support (VKB+LWAS). The FI group acted as a control during gait analysis to mimic normal walking. The designs of the orthotics are shown in Figure 1. The insoles were custom-made in the Department of Prosthetics and Orthotics at Queen Mary Hospital, while the Unloader valgus knee braces (Össur hf, Reykjavik, Iceland) were ordered for each patient after measurement. Each of the orthotic treatments was prescribed for 4 weeks and each patient underwent 24 weeks of treatment to use all six orthotics.
 

Figure 1. Various orthotic treatments: (a) valgus knee brace, (b and c) lateral-wedged insole with subtalar strap, (d) lateral-wedged insole, and (e and f) lateral-wedge with arch support
 
For subjective clinical outcomes, pain scores using the visual analogue scale (VAS) and version 3.1 of the Chinese-validated Western Ontario and McMaster Universities Arthritis Index (WOMAC) were measured. The VAS, with a scale from 0 to 10, was used purely for pain severity. The WOMAC score was ascertained by a self-administered questionnaire consisting of 24 items and subdivided into three categories: pain (5 items), stiffness (2 items), and difficulty performing daily activities (17 items). Analgesic use (number of times required per week) was also compared. Pretreatment and interval assessments were completed after each orthotic treatment. Paired t test was used for analysis.
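The paired t test named above compares each patient's pretreatment score with their own score after an orthotic, testing whether the mean within-patient change differs from zero. A minimal self-contained sketch, with hypothetical WOMAC pain scores for 10 patients (not the study data):

```python
import math

# Hypothetical paired WOMAC pain scores for 10 patients (illustrative only):
# each patient is scored before treatment and again after 4 weeks with one orthotic.
pre = [9, 7, 8, 10, 6, 9, 8, 7, 9, 8]
post = [7, 6, 7, 8, 5, 7, 7, 6, 7, 7]

# Paired t statistic: mean within-patient difference over its standard error.
diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)

# Two-sided critical value for alpha = 0.05 with n-1 = 9 degrees of freedom.
T_CRIT = 2.262
print(f"t = {t_stat:.2f}; significant at P<0.05: {abs(t_stat) > T_CRIT}")
```

In practice this would be done with a statistics package (eg, `scipy.stats.ttest_rel`, which also returns the exact P value); the hand computation above only shows what the test measures.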
 
Gait analysis
Three-dimensional gait analyses were performed for each patient both before and during use of each orthotic treatment at the gait laboratory at the Duchess of Kent Children’s Hospital, Hong Kong, which is an affiliated hospital within the same cluster as Queen Mary Hospital.
 
Fifteen retro-reflective markers were placed according to the Plug-in Gait model (Vicon Industries, Inc, Edgewood [NY], US) as shown in Figure 2. The markers were placed at the bilateral anterior superior iliac spines, midway between the posterior superior iliac spines, the lateral epicondyle of the knee, the lateral lower third of the thigh, the lateral malleolus, the lower third of the shin, the second metatarsal head, and the calcaneus at the level of the second metatarsal head. Three-dimensional positions of the markers and kinematic data were collected by six cameras using the 370 motion analysis system (Vicon Industries, Inc) at a sampling frequency of 60 Hz. Kinetic data were collected using the 370 motion analysis system synchronised with a multicomponent force platform (Kistler, Winterthur, Switzerland) at 60 Hz.
 

Figure 2. Placement of retro-reflective markers (arrows) for gait analysis
 
Peak and mean KAMs during the stance phase of the gait cycle were measured. Mechanical alignment throughout the gait cycle was derived from the hip, knee, and ankle centres determined from the retro-reflective markers. After data collection, the data were analysed jointly by orthopaedic surgeons and prosthetic and orthotic specialists with a background in biomedical engineering. Gait analysis comparisons were made with the FI group and baseline control data, on the assumption that a flat insole would not alter knee kinematics. The control data from the gait laboratory consisted of 47 age-matched healthy participants with a normal gait pattern.
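Extracting the two outcome measures from a KAM time series is straightforward once the cycle is time-normalised: restrict to the stance window (taken here as the first 65% of the cycle, the convention stated in the Results) and take the maximum and the average. A sketch under those assumptions, with a hypothetical moment curve:

```python
STANCE_FRACTION = 0.65  # stance phase taken as the first 65% of the gait cycle

def kam_summary(kam_curve):
    """Return (peak, mean) external knee adduction moment over stance.

    kam_curve: moment values sampled at equal intervals over one full,
    time-normalised gait cycle (0-100%), eg in % body weight x height.
    """
    n_stance = max(1, int(len(kam_curve) * STANCE_FRACTION))
    stance = kam_curve[:n_stance]
    return max(stance), sum(stance) / len(stance)

# Hypothetical 20-sample cycle: double-hump loading during stance
# (early- and late-stance peaks, mid-stance dip), unloaded swing phase.
curve = [0.5, 1.8, 3.0, 3.6, 3.2, 2.6, 2.4, 2.7, 3.3, 3.5,
         2.9, 1.9, 0.8, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
peak, mean = kam_summary(curve)
print(f"peak KAM = {peak:.2f}, mean stance KAM = {mean:.2f}")
```

Averaging over stance only (rather than the whole cycle) matters, because the near-zero swing-phase values would otherwise dilute the mean moment.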
 
Paired t tests were used for comparison of different gait parameters between the orthotic type and baseline measurement.
 
Results
Eighteen patients (36 knees) were initially recruited into our study. Nineteen knees of 10 patients completed the study, and the remaining eight patients withdrew for personal reasons. Of the 10 patients, nine had bilateral disease and one had unilateral disease. Ten knees were right knees and nine were left knees. There were six women and four men. The mean age of the patients was 56 years (range, 51-65 years). The pretreatment motion arc ranged from 65° to 140° (mean, 122°).
 
The changes in mean WOMAC and VAS scores for various orthotic treatments and their comparison with pretreatment scores are shown in Table 1. The results of mean and peak KAMs throughout the gait cycle with different orthotics are shown in Figure 3a. The mean and peak KAMs for each orthotic are shown in Tables 2 and 3, respectively. Figure 3b shows the knee mechanical alignment derived from the hip, knee, and ankle centres. The initial 65% of the gait cycle represents the stance phase and the remaining 35% the swing phase. Compliance with the orthotic treatments is shown in Figure 4.
 

Table 1. Comparison of subjective pretreatment and post-treatment scores for various orthotics
 

Table 2. Mean knee adduction moment for various orthotic treatments
 

Table 3. Peak knee adduction moment for various orthotic treatments
 

Figure 3. (a) Comparison of knee adduction moments with different orthotic treatments throughout the gait cycle. (b) Comparison of mean mechanical alignment with different orthotic treatments
 

Figure 4. Compliance with orthotic treatments expressed as percentage of total walking time
 
The LW group demonstrated a significant reduction of 19.1% in the WOMAC pain score (P=0.04). Reductions in total and other WOMAC subscale scores, VAS score, and analgesic requirement were observed, but none were statistically significant. Mean and peak KAMs were reduced by 18.1% and 13.1% (P<0.05), respectively, when compared with the FI group. The compliance rate was 94.7% of total walking time.
 
With the addition of subtalar strapping, intended to increase the effectiveness of the lateral wedge, the LW+SS group demonstrated a greater reduction in peak KAM (18.8%) but a smaller reduction in mean KAM (17.6%) than the LW group [P<0.05]. The net effect of LW+SS did not confer any statistically significant reduction in VAS score, WOMAC score, or analgesic requirement when compared with pretreatment scores. The compliance rate for the LW+SS group was 94.7% of total walking time.
 
The LWAS group demonstrated statistically significant reductions in VAS score of 24.1% (P=0.004) and WOMAC pain score of 18.2% (P=0.04). Mean and peak KAMs were also significantly reduced, by 9.7% and 13.7%, respectively (P<0.05). The reduction in VAS score was greater in the LWAS group than in the LW and LW+SS groups. The VAS score may be a more reliable indicator of actual symptom improvement than the WOMAC pain score. The compliance rate was also highest for the LWAS group, at 97.4% of total walking time. No significant difference in analgesic requirement was observed.
 
With respect to mean mechanical alignment, as shown in Figure 3b, all the insole groups (LW, LW+SS, and LWAS) showed lower varus angle throughout the stance phase. The stance phase is the symptomatic phase when the knee is under loading.
 
The VKB group showed statistically significant reductions in VAS score and WOMAC pain score of 15.5% (P=0.04) and 20.4% (P=0.02), respectively. The WOMAC total score and other subscale scores showed some reductions, but these were not statistically significant. The analgesic requirement was also significantly reduced, from 1.5 days/week pretreatment to 0.5 days/week post-treatment (P=0.04). Mean and peak KAMs were reduced by 15.5% and 18.9%, respectively (P<0.05). Mechanical alignment, as seen in Figure 3b, showed reduced varus angulation during the early stance phase: between 15% and 20% of the gait cycle, representing heel strike to mid-stance, the varus angle was reduced compared with baseline. Thereafter the varus angle remained constant throughout the stance phase, which was related to restricted motion of the knee inside the brace. Compliance was significantly lower than that for any of the insole groups, at 54.5% of total walking time. The low compliance rate was likely due to the bulky valgus knee brace causing skin discomfort, especially in the hot and humid climate of this region.
 
As the LWAS appeared to be the best insole treatment for pain relief and improvement in VAS score, we further evaluated the combined effect of the VKB and LWAS treatments. Additive effects were observed with combined treatment. The VKB+LWAS group showed significant reductions in VAS score and in WOMAC total and all subscale scores: the VAS score was reduced by 22.4% (P=0.004), WOMAC pain score by 29.4% (P=0.001), WOMAC stiffness score by 18.8% (P=0.02), WOMAC activities of daily living score by 26.4% (P=0.002), and WOMAC total score by 26.7% (P=0.001). The reductions in WOMAC total and subscale scores for this group were the greatest of all treatment groups. The analgesic requirement was also significantly reduced, from 1.5 days/week pretreatment to 0.6 days/week post-treatment (P=0.04). Peak KAM showed the greatest reduction of all orthotic groups, at 21.0%, while mean KAM showed a moderate reduction of 16.3% (P<0.05). With regard to mechanical alignment, a reduction in varus angle was observed in the early stance phase, as in the isolated VKB group. Compliance, as expected, was the lowest of all treatment arms, at only 49.1% of total walking time.
 
Discussion
The current literature recommendations for orthotic treatment for medial compartment knee osteoarthritis are still varied. In a guideline by the Osteoarthritis Research Society International (OARSI), insoles were concluded to be of benefit to reduce pain and improve ambulation in knee osteoarthritis.17 However, in another guideline by the American Academy of Orthopaedic Surgeons (AAOS), it was concluded that lateral-wedged insoles could not be suggested for patients with symptomatic osteoarthritis.18 Lateral-wedged insoles have been shown to correct the femorotibial angle19 and reduce the peak external KAM.12 20 Toda et al21 were able to demonstrate a dose-response correction of the femorotibial angle using insoles with different elevations. The effect on subjective scores showed significant improvements in some,22 but not all studies.23 24 Two randomised controlled trials by Maillefert et al23 and Baker et al24 did not show statistically significant changes in WOMAC scores with lateral-wedged insoles, although there was a significant reduction in non-steroidal anti-inflammatory drug intake in the insole group.
 
Our results showed reductions in WOMAC pain score with LW and LWAS and, more importantly, a greater percentage reduction in VAS score with LWAS. Addition of subtalar strapping to lateral-wedged insoles has been shown in other studies to improve VAS scores and to decrease the femorotibial angle25 and peak KAM26 when compared with a lateral-wedged insole alone. The potential drawbacks of subtalar strapping include increased sole pain.27 Our results did not demonstrate any additional benefit of subtalar strapping in terms of WOMAC score or mean KAM. With a significantly greater reduction in VAS score with LWAS than with LW (24.1% vs 10.3%) and a high compliance rate, we believe LWAS is the insole of choice and can be offered to patients with early isolated medial compartment knee osteoarthritis.
 
Knee bracing acts by inducing a valgus force through the three-point bending principle. The OARSI guideline suggests that knee bracing can reduce pain, improve stability, and reduce the risk of falls in patients with mild-to-moderate osteoarthritis or valgus instability.17 However, the guideline from the AAOS could not conclude for or against the use of valgus-directed bracing.18 Advantages of knee bracing include avoidance of surgery and its potential complications, while the disadvantages include compliance and the cost of manufacturing the brace.28 A randomised controlled trial by Brouwer et al29 compared three treatment groups: valgus knee brace plus medical treatment, insole plus medical treatment, and medical treatment alone. The brace plus medical treatment was shown to have borderline benefit compared with medical treatment alone in terms of pain score and function.29 These findings concur with our results of an improved WOMAC pain subscale score and a reduced analgesic requirement with the valgus knee brace when compared with pretreatment scores. From the kinetics perspective, Pollo et al7 were able to demonstrate a 13% reduction in net external KAM; our gait analysis model reproduced a reduction in peak KAM of 18.9%. Despite the potential benefits of the valgus knee brace, compliance remains a major drawback. With a compliance rate of 54.5%, many of our patients claimed that they did not wear the braces outdoors because of skin discomfort in the hot and humid climate. Our evidence suggests that the valgus knee brace is suitable for selected patients with mild knee osteoarthritis, with consideration of the problems of fitting and compliance.
 
Our current study was among the few to evaluate the effects of combination orthotic treatment with valgus knee brace and lateral-wedged insole with arch support. The VKB+LWAS group was the only one to demonstrate significant reductions in WOMAC total and all subscale scores, analgesic use, and KAM when compared with pretreatment. These results further reiterate the dose-response relationship in reducing KAM to achieve improvement in objective knee scores. Despite these findings, the poor compliance rate would render this orthotic treatment less advisable.
 
Limitations
Limitations of our study included a small sample size, selection bias, self-selection bias, and a short follow-up period. Sample sizes of fewer than 20 patients are common in gait analysis studies.30 31 32 A larger sample size would provide higher power to determine statistical significance in more of the evaluated parameters. Compliance with orthotic treatment, in particular with the valgus knee brace, was another concern. Confounding factors in our study included the frequency of weight-bearing activities, which is difficult to quantify.
 
This was a cross-over study, with all patients treated sequentially with all six orthotic combinations. The advantage of this design is an economy of sample size, without the need to account for heterogeneity between patient groups. Its disadvantages include the lack of a treatment break and the lack of randomisation of the treatment sequence. VAS scores reported by elderly people may also be inaccurate.
 
Conclusions
Knee osteoarthritis continues to pose a significant burden to our community, with its ageing population and increasing prevalence of obesity. While operative treatments are not without risk, orthotic treatment also has its advantages and disadvantages. Our study demonstrated, from subjective scores and gait analysis, that orthotic treatment can alter knee loading and alleviate symptoms. The lateral-wedged insole with arch support appears to be the optimal insole, while the valgus knee brace is similarly effective but achieves only fair compliance. Further studies with a larger sample size are required to evaluate long-term effectiveness.
 
References
1. Knutson K, Lindstrand A, Lidgren L. Survival of knee arthroplasties. A nation-wide multicentre investigation of 8000 cases. J Bone Joint Surg Br 1986;68:795-803.
2. Swedish Knee Arthroplasty Registry. SKAR Annual Report; 2011.
3. Ledingham J, Regan M, Jones A, Doherty M. Radiographic patterns and associations of osteoarthritis of the knee in patients referred to hospital. Ann Rheum Dis 1993;52:520-6. Crossref
4. Sharma L, Song J, Dunlop D, et al. Varus and valgus alignment and incident and progressive knee osteoarthritis. Ann Rheum Dis 2010;69:1940-5. Crossref
5. Chang A, Hayes K, Dunlop D, et al. Thrust during ambulation and the progression of knee osteoarthritis. Arthritis Rheum 2004;50:3897-903. Crossref
6. Birmingham TB, Hunt MA, Jones IC, Jenkyn TR, Giffin JR. Test-retest reliability of the peak knee adduction moment during walking in patients with medial compartment knee osteoarthritis. Arthritis Rheum 2007;57:1012-7. Crossref
7. Pollo FE, Otis JC, Backus SI, Warren RF, Wickiewicz TL. Reduction of medial compartment loads with valgus bracing of the osteoarthritic knee. Am J Sports Med 2002;30:414-21.
8. Lindenfeld TN, Hewett TE, Andriacchi TP. Joint loading with valgus bracing in patients with varus gonarthrosis. Clin Orthop Relat Res 1997;344:290-7. Crossref
9. Pagani CH, Böhle C, Potthast W, Brüggemann GP. Short-term effects of a dedicated knee orthosis on knee adduction moment, pain, and function in patients with osteoarthritis. Arch Phys Med Rehabil 2010;91:1936-41. Crossref
10. Toriyama M, Deie M, Shimada N, et al. Effects of unloading bracing on knee and hip joints for patients with medial compartment knee osteoarthritis. Clin Biomech (Bristol, Avon) 2011;26:497-503. Crossref
11. Hinman RS, Bowles KA, Bennell KL. Laterally wedged insoles in knee osteoarthritis: do biomechanical effects decline after one month of wear? BMC Musculoskelet Disord 2009;10:146. Crossref
12. Fantini Pagani CH, Hinrichs M, Brüggemann GP. Kinetic and kinematic changes with the use of valgus knee brace and lateral wedge insoles in patients with medial knee osteoarthritis. J Orthop Res 2012;30:1125-32. Crossref
13. Butler RJ, Marchesi S, Royer T, Davis IS. The effect of a subject-specific amount of lateral wedge on knee mechanics in patients with medial knee osteoarthritis. J Orthop Res 2007;25:1121-7. Crossref
14. Kakihana W, Akai M, Nakazawa K, Naito K, Torii S. Inconsistent knee varus moment reduction caused by a lateral wedge in knee osteoarthritis. Am J Phys Med Rehabil 2007;86:446-54. Crossref
15. Belo JN, Berger MY, Koes BW, Bierma-Zeinstra SM. The prognostic value of the clinical ACR classification criteria of knee osteoarthritis for persisting knee complaints and increase of disability in general practice. Osteoarthritis Cartilage 2009;17:1288-92. Crossref
16. Kellgren JH, Lawrence JS. Radiological assessment of osteo-arthrosis. Ann Rheum Dis 1957;16:494-502. Crossref
17. Zhang W, Moskowitz RW, Nuki G, et al. OARSI recommendations for the management of hip and knee osteoarthritis, Part II: OARSI evidence-based, expert consensus guidelines. Osteoarthritis Cartilage 2008;16:137-62. Crossref
18. Jevsevar DS, Brown GA, Jones DL, et al. The American Academy of Orthopaedic Surgeons evidence-based guideline on: treatment of osteoarthritis of the knee, 2nd edition. J Bone Joint Surg Am 2013;95:1885-6.
19. Yasuda K, Sasaki T. The mechanics of treatment of the osteoarthritic knee with a wedged insole. Clin Orthop Relat Res 1987;215:162-72.
20. Shimada S, Kobayashi S, Wada M, et al. Effects of disease severity on response to lateral wedged shoe insole for medial compartment knee osteoarthritis. Arch Phys Med Rehabil 2006;87:1436-41. Crossref
21. Toda Y, Tsukimura N, Kato A. The effects of different elevations of laterally wedged insoles with subtalar strapping on medial compartment osteoarthritis of the knee. Arch Phys Med Rehabil 2004;85:673-7. Crossref

Predictive factors for colonoscopy complications

Hong Kong Med J 2015 Feb;21(1):23–9 | Epub 30 Jan 2015
DOI: 10.12809/hkmj144266
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Predictive factors for colonoscopy complications
Annie OO Chan, MB, BS, MD1; Louis NW Lee, MB, BS, FRCS (Edin)2; Angus CW Chan, MB, ChB, MD2; WN Ho, BHSs (Nursing)2; Queenie WL Chan, BHSs (Nursing)3; Silvia Lau, MPH, MSc4; Joseph WT Chan, MB, BS, FRCOG5
1 Gastroenterology & Hepatology Centre, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong
2 Endoscopy Centre, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong
3 Nursing Administration Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong
4 Medical Physics & Research Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong
5 Hospital Administration Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong
Corresponding author: Dr Queenie WL Chan (wlchan@hksh.com)
 Full paper in PDF
Abstract
Objective: To determine factors predicting complications caused by colonoscopy.
 
Design: Prospective cohort study.
 
Setting: A private hospital in Hong Kong.
 
Patients: All patients undergoing colonoscopy in the Endoscopy Centre of the Hong Kong Sanatorium & Hospital from 1 June 2011 to 31 May 2012 were included. Immediate complications were those recorded by nurses during and up to the day after the examination, while delayed complications were ascertained 30 days after the procedure by means of a consented telephone interview conducted by trained student nurses. Data were presented as frequency and percentage for categorical variables. Logistic regression was used to fit models relating immediate and systemic complications to candidate factors.
 
Results: A total of 6196 patients (mean age, 53.7 years; standard deviation, 12.7 years; 3143 women) were enrolled and 3657 telephone interviews were completed. The incidence of immediate complications was 15.3 per 1000 procedures (95% confidence interval, 12.3-18.4); 50.5% were colonoscopy-related, including one perforation and other minor presentations. Being female (adjusted odds ratio=1.6), use of monitored anaesthetic care (adjusted odds ratio=1.8), inadequate bowel preparation (adjusted odds ratio=3.5), and incomplete colonoscopy (adjusted odds ratio=4.5) were predictors of risk for all immediate complications (all P<0.05 by logistic regression). The incidence of delayed complications was 1.6 per 1000 procedures (95% confidence interval, 0.3-3.0), comprising five post-polypectomy bleeds and one post-polypectomy inflammation. The overall incidence of complications was 17.8 per 1000 procedures (95% confidence interval, 13.5-22.1). These incidences were at the lower end of the ranges reported in studies worldwide.
 
Conclusion: Inadequate bowel preparation and incomplete colonoscopy were identified as factors that increased the risk for colonoscopy-related complications. Colonoscopy-related complications occurred as often as systemic complications, showing the importance of monitoring.
 
 
New knowledge added by this study
  •  The risks of local and systemic complications of colonoscopy are of paramount importance.
Implications for clinical practice or policy
  •  Enforcing bowel preparation and post-polypectomy care may reduce the risk of delayed complications.
 
 
Introduction
Colonoscopy is an efficient and commonly used, albeit invasive, diagnostic tool with promising therapeutic capacity. Common colonoscopy-related complications include prolonged pain and distension, which rarely draw medical attention or lead to hospitalisation. Severe complications, including bleeding and perforation, are potentially life-threatening and require urgent management. Although death is uncommon, occurring in no more than 3 per 10 000 procedures, the incidence of post-polypectomy bleeding and perforation ranges from 1.6 to 14.8 and 0.2 to 1.0 per 1000 procedures, respectively.1 2 3 4 5 6 It is difficult to accurately benchmark direct colonoscopy-related complications because studies define their outcome measures differently. For example, some studies include immediate complications only, while others extend the complication period to 7 or 30 days; some include an extensive list of complications while others include only bleeding and perforation.1 2 5 7 8 9 10 Furthermore, the efficacy and safety of the procedure vary across clinical settings and target populations. In the absence of local data, no consensus on complication incidence has been reached.
 
Intravenous sedation is routinely used during colonoscopy to minimise the discomfort and pain associated with the procedure. Endoscopists are equipped to give sedatives and to monitor their side-effects, but anaesthetists are often invited to provide monitored anaesthetic care (MAC) when the patient is considered to be at high risk for complications; older patients and those with multiple co-morbidities, for instance, are particularly vulnerable. Systemic complications vary from prolonged drowsiness to fatal cardiovascular or cerebrovascular events. Cardiovascular events following sedation, such as hypotension and myocardial infarction, during colonoscopy have been reported to range from 0.1 to 59.1 per 1000 procedures, and cerebrovascular events such as stroke from 0.1 to 1.3 per 1000 procedures.3 6 10 The outcome variables are highly heterogeneous: for example, Nelson et al's study3 included myocardial infarction, vasovagal event, and arrhythmia among cardiovascular incidents, whereas Ma et al's study10 recorded hypotension only. Some endoscopists have instead studied complications in relation to carbon dioxide (CO2) insufflation and the absence of sedation.11 12 13 This study aimed to record all complications systematically and to determine the relevant risk factors.
 
Methods
This prospective study collected data for all colonoscopies done from 1 June 2011 to 31 May 2012 at the Endoscopy Centre of the Hong Kong Sanatorium & Hospital (HKSH), a private hospital in Hong Kong. Prior to colonoscopy, patients were invited to give written consent for participation in the study, including the 30-day follow-up telephone interview. The Hospital Management Committee, incorporating the Research Ethics Committee of the HKSH, approved the study.
 
The complications were recorded by nurses on a standard form during and immediately after the procedure. The standard audit forms for immediate and delayed colonoscopy complications were designed by a research doctor, with the most common complications based on literature review.
 
The immediate complications audit form included patients' demographics, use of sedation/analgesic/antispasmodic, use of MAC, gross indication for colonoscopy (therapeutic or diagnostic), type of therapeutic procedure performed (such as polypectomy), the reason for incomplete colonoscopy (caecal intubation failure), quality of bowel preparation as rated by the endoscopist (adequate [good/adequate] or inadequate [fair/poor]), and the use of CO2 insufflation. Complication data were divided into systemic and colonoscopy-related complications. For systemic complications, we captured data for nausea/vomiting, hypotension (systolic blood pressure <100 mm Hg), bradycardia/tachycardia (heart rate <50 or >100 beats/min), vasovagal fainting, and other cardiovascular or cerebrovascular events. For colonoscopy-related complications, data for perforation, persistent pain/discomfort, abdominal distension, and haemorrhage were gathered.
 
Delayed complications were defined as any of the above events occurring from the day after the initial colonoscopy to the 30th day that required readmission or admission to another hospital. For patients readmitted to our hospital, we inspected the records to establish the reasons and interventions and whether the readmission was complication-related. Otherwise, trained student nurses or a research doctor telephoned all consenting participants to ask about 30-day complications using the delayed complications audit form. A participant was declared lost to follow-up after three unsuccessful telephone attempts. The student nurses were trained by senior nurses and the research doctor using standard instructions.
 
Data analysis was performed using the Statistical Package for the Social Sciences (Windows version 14.0; SPSS Inc, Chicago [IL], US). Descriptive statistics (mean, percentage, incidence, and/or 95% confidence interval [CI]) were used to summarise the characteristics of the sample. For complications with zero events, only 95% CIs were given.14 Backward logistic regression analyses were performed to build prediction models for immediate complications (colonoscopy-related or systemic) and for overall complications (immediate plus delayed). Candidate variables were age, sex (male or female), use of MAC (yes or no), adequacy of bowel preparation (adequate or not), and completion of colonoscopy (yes or no), selected from univariate Pearson chi-squared tests of all potential independent variables with a significance level set at 10%; retention of a variable in the backward iterations required P<0.1. Exponentiated coefficients were interpreted as adjusted odds ratios (ORadjusted), with 95% CIs provided. All significance levels were set at two-sided α=0.05.
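The interval estimates used here can be illustrated with a short sketch. A Wald (normal-approximation) interval reproduces the reported immediate-complication incidence; note that the event count of 95 is back-calculated from the reported rate of 15.3 per 1000 among 6196 procedures, so it is an assumption rather than a figure stated in the paper. For complications with zero events, a common substitute is the "rule of three" upper bound of 3/n (cf. reference 14).

```python
import math

def incidence_ci(events: int, n: int, z: float = 1.96):
    """Incidence per 1000 procedures with a Wald 95% CI.

    For zero events the Wald interval collapses to a point, so the
    'rule of three' upper bound (3/n) is reported instead.
    """
    if events == 0:
        return 0.0, (0.0, 3.0 / n * 1000)
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return p * 1000, ((p - z * se) * 1000, (p + z * se) * 1000)

# 95 immediate complications among 6196 procedures (event count
# back-calculated from the reported 15.3 per 1000 -- an assumption)
rate, (lo, hi) = incidence_ci(95, 6196)
print(round(rate, 1), round(lo, 1), round(hi, 1))  # 15.3 12.3 18.4
```

The same function with events=6 and n=3657 reproduces the delayed-complication estimate of 1.6 per 1000 (95% CI, 0.3-3.0).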
 
Results
A total of 6196 colonoscopies (3143 women; mean age, 53.7 years; standard deviation, 12.7 years) were done during the study period. Most patients were aged between 45 and 64 years, were in-patients, had undergone diagnostic colonoscopy, and received intravenous sedation (60.5%, 70.5%, 53.0%, and 99.4%, respectively; Table 1). The data for immediate complications were complete, while the 30-day follow-up was completed for 3657 procedures (803 were lost to follow-up and 1736 refused; compliance rate, 59.0%; Table 2).
 

Table 1. Demographic characteristics, sedation, and colonoscopy data (n=6196)
 

Table 2. Thirty-day follow-up, readmission, and mortality
 
Of the 6196 colonoscopies, 2912 were therapeutic, with 99.7% dedicated to polypectomy (Table 1). There were 73 (1.2%) cases of incomplete colonoscopy, 18 (24.7%) of which were due to inadequate preparation. Other reasons for incomplete colonoscopy were tumour obstruction (15 of 73; 20.5%) and intended sigmoidoscopy or stent insertion (26 of 73; 35.6%). A total of 149 patients were readmitted within 30 days after the procedure, of whom six (4.0%) were readmitted for complications. The other reasons were cancer, gastro-intestinal disease, or cardiac events (Table 2).
 
Systemic complications
Regarding the choice of sedation, midazolam and pethidine were the most used, at 81.5% and 82.0% respectively, while 15.9% of patients underwent MAC. Immediate systemic complications reported were hypotension, vasovagal fainting, nausea/vomiting, or a combination of these (6.1, 0.6, 0.3, and 0.5 per 1000 procedures, respectively; Table 3). There were no severe cardiovascular events such as arrhythmia or myocardial infarction, and no cerebrovascular events such as stroke. Furthermore, none of the patients reported delayed systemic complications in the 30-day follow-up.
 

Table 3. Incidence of overall complications (per 1000 procedures) and their interventions
 
Modelling to study the potential risk factors showed that being female, use of MAC, and inadequate bowel preparation were the significant independent predictors for systemic immediate complications (ORadjusted=2.0, 2.6, and 3.7, respectively; all were P<0.05; Table 4).
 

Table 4. Multivariate analysis of risk predictors for complications by different models
 
Colonoscopy-related complications
Immediate colonoscopy-related complications were recorded for 48 patients (7.7 per 1000 procedures), comprising persistent pain/discomfort, abdominal distension, and perforation (Table 3). The only perforation was at the sigmoid-rectal junction and was due to adhesion, possibly related to previous abdominal surgery (total hysterectomy and bilateral salpingo-oophorectomy had been done 10 years previously). In the 30-day follow-up, six patients reported complication-related readmissions: five for bleeding after discharge from hospital and one for inflammation; all were caused by polypectomy.
 
Modelling was done to identify significant independent risk factors for the outcome events (Table 4). Inadequate bowel preparation and incomplete colonoscopy were the significant predictors for immediate colonoscopy-related complications (ORadjusted=3.5 and 6.2, respectively; both P<0.05) and for all immediate complications (ORadjusted=3.5 and 4.5, respectively; both P<0.05). In the model for all immediate complications, being female and use of MAC were also predictors (ORadjusted=1.6 and 1.8, respectively; both P<0.05).
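An ORadjusted is the exponentiated coefficient from the logistic regression described in the Methods, which adjusts each factor for the others. As a simpler illustration of where an odds ratio comes from, the sketch below computes a crude (unadjusted) odds ratio with a Woolf log-normal confidence interval from a 2x2 table; the counts used are hypothetical and are not taken from this study.

```python
import math

def crude_or(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Woolf (log-normal) 95% CI for a 2x2 table:
    a = exposed with complication,   b = exposed without,
    c = unexposed with complication, d = unexposed without.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical illustration only (not study data): 12 of 200
# incomplete colonoscopies vs 80 of 5996 complete ones with an
# immediate complication.
or_, (lo, hi) = crude_or(12, 188, 80, 5916)
print(round(or_, 2))
```

A crude value like this may differ from the corresponding ORadjusted once the other covariates (sex, MAC, bowel preparation) are adjusted for in the regression.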
 
A predictive model was also constructed for overall complications, including all immediate and delayed complications among those who completed follow-up at 30 days. The previously significant predictors proved transient: none predicted the overall complications occurring in the 30-day post-colonoscopy period. Monitored anaesthetic care was the only predictor during the 30-day period (ORadjusted=2.0; P=0.019).
 
Discussion
Significance of this study
Inadequate bowel preparation and incomplete colonoscopy were identified as risk factors for colonoscopy-related complications. Other complications were mostly hypotension and abdominal distension. No myocardial infarction, transient ischaemic attack, or death relating to colonoscopy was reported.
 
Contribution of individual characteristics to the complications
Colonoscopy is rarely a complication-free procedure, but a good understanding of the possible complications can help to minimise them. While an experienced endoscopist, a diagnostic (rather than therapeutic) procedure, and younger patient age are protective factors against colonoscopy complications, trainee endoscopists, therapeutic procedures, advanced age, female sex, obesity, co-morbidity, anticoagulant use, and previous abdominal surgery are risk factors.3 5 6 10 15 16 17 The relatively low incidence of complications recorded in this study could be attributed to the fact that more than half of the procedures were diagnostic, thus reducing the potential for polypectomy-associated complications.
 
Other events—such as abdominal pain and distension, hypotension, vasovagal fainting, and nausea/vomiting—accounted for 98.9% of all immediate complications; these events were mostly reported by women (60.6%; ORadjusted=1.6; Table 4). This result is consistent with the literature.17 This effect could possibly be explained by different perceptions of somato-sensation, which could be traced back to the socio-emotional cultivation and cultural expectation of the different sexes.18
 
Hidden factors that may contribute to complications
Despite the sex effect, use of MAC, inadequate bowel preparation, and incomplete colonoscopy were all related to immediate complications. The use of MAC, in particular, requires further interpretation because it has not been found to be a risk factor in other studies and its use ought not to be a cause of complications. However, MAC is intended for patients who are vulnerable to the complications of sedation and the procedure, especially those with co-morbidities and of advanced age. In this study, co-morbidity was not reviewed, but patients who underwent MAC were significantly older than those who did not (mean age, 56.1 vs 53.3 years; P<0.001, t test), which may be partly contributory. However, age was not significantly associated with any of the complications after adjustment for other factors. Age might have a greater impact if co-morbidity were also considered, and this may be an area for further study.
 
Inadequate bowel preparation, in which faeces obscure the inner lining of the colon, impedes the vision and heightens the risk for complications. Likewise, incomplete colonoscopy due to inadequate bowel preparation or unbearable discomfort inevitably increases the risk for complications. These results are consistent with other studies.2 3 10
 
A large proportion (59%) of the study population completed the study. Per protocol univariate analyses showed that use of MAC was the sole significant factor related to overall complications (Table 4).
 
Other factors
Sedation-free colonoscopy is a feasible alternative that could reduce the risk of systemic complications; use of CO2 insufflation might increase its tolerability while maintaining visibility. In this study, a sedation-free procedure was performed in only 36 patients, with no complications; CO2 insufflation was used for 647 (10.9%) patients instead of room air, with only minor complications encountered (two patients reported abdominal distension and seven reported hypotension/vasovagal fainting). Insufflation with CO2 could be adopted more widely because the gas is non-explosive, absorbable, and does not affect mucosal blood flow, thus minimising discomfort and the risk of colonic ischaemia. According to Bretthauer et al,19 CO2 insufflation leads to quicker recovery, less pain, and fewer complications. These advantages are supported by other studies, including local research.11 19 20 Such findings might inspire greater use of CO2 insufflation and minimise the use of sedation for selected patients.12 13 21
 
Inadequate bowel preparation not only hampers completion of the procedure, but also increases the risk for complications. As evidenced from the prediction models, inadequate bowel preparation significantly increases the occurrence of complications. In addition to implementing the current standardised bowel preparation protocol, enforcement of patient education, compliance, and early admission for monitored bowel preparation might help to further suppress the rate for inadequate bowel preparation.
 
Limitations
Many of the patients were lost to contact by telephone. We assumed that the pattern of missing patients was random and unrelated to whether the patients had complications. However, self-selection bias might exist: if patients without complications were more likely to be lost to contact, the rate of delayed complications would be overestimated; conversely, it would be underestimated if patients with complications had been admitted to other hospitals or had died. Endoscopist experience, and patient co-morbidities and severe symptoms before the procedure, were not recorded for analysis in this study; these are potential risk factors for complications. In view of the importance of monitoring the complications of colonoscopy, further study might include the missing variables of co-morbidity, endoscopist experience, symptoms at presentation, and oxygen level. It might also explore factors such as body mass index and medical history, sedation dose, indication for colonoscopy, use of laxatives and compliance with them, and post-colonoscopy diagnosis/pathology, which might help to reduce the unexplained variance in the prediction of complications.
 
Acknowledgements
We would like to thank Dr Sheri Lim, former Administrative Officer (Medical) who proposed and developed the audit. We are grateful to Dr KM Lai, Head of Department of Anaesthesiology for his expert advice on the anaesthetic terminology; Dr Raymond Yung and Dr KN Lai, Assistant Medical Superintendents for their valuable suggestions and encouragement; Ms Grace Wong, Medical Records Manager for coordinating with different departments; and Ms Sara Fung, former Research Nurse for coordinating the project. Also, thanks to all doctors who contributed their knowledge and data, for this study would not have been accomplished without them.
 
References
1. Lorenzo-Zúñiga V, Moreno de Vega V, Doménech E, Mañosa M, Planas R, Boix J. Endoscopist experience as a risk factor for colonoscopic complications. Colorectal Dis 2010;12(10 Online):e273-7.
2. Gastrointestinal endoscopy, version 1. Australasian Clinical Indicator Report 2003-2010. Australian Council on Healthcare Standards; 2010.
3. Nelson DB, McQuaid KR, Bond JH, Lieberman DA, Weiss DG, Johnston TK. Procedural success and complications of large-scale screening colonoscopy. Gastrointest Endosc 2002;55:307-14.
4. Rathgaber SW, Wick TM. Colonoscopy completion and complication rates in a community gastroenterology practice. Gastrointest Endosc 2006;64:556-62.
5. Rabeneck L, Paszat LF, Hilsden RJ, et al. Bleeding and perforation after outpatient colonoscopy and their risk factors in usual clinical practice. Gastroenterology 2008;135:1899-1906, 1906.e1.
6. Viiala CH, Zimmerman M, Cullen DJ, Hoffman NE. Complication rates of colonoscopy in an Australian teaching hospital environment. Intern Med J 2003;33:355-9.
7. Ko CW, Riffle S, Michaels L, et al. Serious complications within 30 days of screening and surveillance colonoscopy are uncommon. Clin Gastroenterol Hepatol 2010;8:166-73.
8. Singh H, Penfold RB, DeCoster C, et al. Colonoscopy and its complications across a Canadian regional health authority. Gastrointest Endosc 2009;69:665-71.
9. Teruki O, Keiichi O. The safety of endoscopic day surgery for colorectal polyps. Dig Endosc 2008;20:92-5.
10. Ma WT, Mahadeva S, Kunanayagam S, Poi PJ, Goh KL. Colonoscopy in elderly Asians: a prospective evaluation in routine clinical practice. J Dig Dis 2007;8:77-81.
11. Wong JC, Yau KK, Cheung HY, Wong DC, Chung CC, Li MK. Towards painless colonoscopy: a randomized controlled trial on carbon dioxide-insufflating colonoscopy. ANZ J Surg 2008;78:871-4.
12. Bayupurnama P, Nurdjanah S. The success rate of unsedated colonoscopy examination in adult. Internet Journal of Gastroenterology 2010;9:2.
13. Ylinen ER, Vehviläinen-Julkunen K, Pietilä AM, Hannila ML, Heikkinen M. Medication-free colonoscopy—factors related to pain and its assessment. J Adv Nurs 2009;65:2597-607.
14. Ho AK. When the numerator is zero: another lesson on risk. Am Biol Teach 2009;71:531-3.
15. Miller A, McGill D, Bassett ML. Anticoagulant therapy, anti-platelet agents and gastrointestinal endoscopy. J Gastroenterol Hepatol 1999;14:109-13.
16. Dobbins C, DeFontgalland D, Duthie G, Wattchow DA. The relationship of obesity to the complications of diverticular disease. Colorectal Dis 2006;8:37-40.
17. Ko CW, Riffle S, Shapiro JA, et al. Incidence of minor complications and time lost from normal activities after screening or surveillance colonoscopy. Gastrointest Endosc 2007;65:648-56.
18. Kroenke K, Spitzer RL. Gender differences in the reporting of physical and somatoform symptoms. Psychosom Med 1998;60:150-5.
19. Bretthauer M, Lynge AB, Thiis-Evensen E, Hoff G, Fausa O, Aabakken L. Carbon dioxide insufflation in colonoscopy: safe and effective in sedated patients. Endoscopy 2005;37:706-9.
20. Welchman S, Cochrane S, Minto G, Lewis S. Systematic review: the use of nitrous oxide gas for lower gastrointestinal endoscopy. Aliment Pharmacol Ther 2010;32:324-33.
21. Takahashi Y, Tanaka H, Kinjo M, Sakumoto K. Sedation-free colonoscopy. Dis Colon Rectum 2005;48:855-9.
 
 

Role of fine-needle aspiration cytology in human immunodeficiency virus–associated lymphadenopathy: a cross-sectional study from northern India

Hong Kong Med J 2015 Feb;21(1):38–44 | Epub 21 Nov 2014
DOI: 10.12809/hkmj144241
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Role of fine-needle aspiration cytology in human immunodeficiency virus–associated lymphadenopathy: a cross-sectional study from northern India
Naveen Kumar, MD1; BB Gupta, MD1; Brijesh Sharma, MD1; Manju Kaushal, MD2; BB Rewari, MD1; Deepak Sundriyal, MD1
1 Department of Medicine, PGIMER and Dr RML Hospital, New Delhi 110001, India
2 Department of Pathology, PGIMER and Dr RML Hospital, New Delhi 110001, India
 
Corresponding author: Dr Naveen Kumar (docnaveen2605@yahoo.co.in), (2605docnaveen@gmail.com)
 Full paper in PDF
Abstract
Objective: To evaluate the role of fine-needle aspiration cytology in the diagnosis of human immunodeficiency virus (HIV)–associated lymphadenopathy.
 
Design: Case series.
 
Setting: Tertiary care teaching hospital, India.
 
Patients: Fifty consecutive HIV-positive patients, who presented with lymphadenopathy at the out-patient department and antiretroviral therapy clinic.
 
Results: Tubercular lymphadenitis was the most common diagnosis, reported in 74% (n=37) of patients; 97.2% of them were acid-fast bacilli–positive. Reactive lymphadenitis and fungal lymphadenitis were present in 10 and 1 cases, respectively. The most common cytomorphological pattern of tubercular lymphadenitis was necrotising suppurative lymphadenitis, present in 43.2% (n=16) of patients. Of eight biopsies done in reactive cases, six turned out to be tubercular lymphadenitis. Fine-needle aspiration cytology had a sensitivity of 83.7% for diagnosing tubercular lymphadenitis.
 
Conclusion: Necrotising suppurative lymphadenitis should be recognised as an established pattern of tubercular lymphadenitis. Reactive patterns should be considered inconclusive rather than a negative result, and re-evaluated with lymph node biopsy. Fine-needle aspiration cytology is an excellent test for diagnosing tubercular lymphadenitis in HIV-associated lymphadenopathy.
 
 
New knowledge added by this study
  •  Necrotising suppurative lymphadenitis should be recognised as an established pattern of tubercular lymphadenitis.
  •  In advanced human immunodeficiency virus (HIV) disease, reactive lymphadenitis should be considered inconclusive rather than a negative result, and re-evaluated with lymph node biopsy.
Implications for clinical practice or policy
  •  As lymphadenopathy is common in all stages of HIV disease, judicious use of fine-needle aspiration cytology can be helpful in diagnosing associated opportunistic infections and other pathological conditions.
 
 
Introduction
Human immunodeficiency virus (HIV) infection is an important worldwide public health problem. Developing nations, where resources are limited, are the worst affected. Until curative treatment for HIV infection becomes available, the crux of management is early diagnosis and treatment with highly active antiretroviral therapy.
 
As HIV is a lymphotropic virus, lymphoid tissues are the major anatomical sites where the virus establishes itself during early infection. These tissues act as reservoirs for the virus in the asymptomatic phase of infection. In the late stage, HIV disseminates from these sites to cause full-blown acquired immunodeficiency syndrome (AIDS).1 Thus, lymph node involvement is found in all stages of infection. The cause of lymph node enlargement is often difficult to establish by history, physical examination, radiographic studies, and routine laboratory tests. Surgical biopsy is the gold standard for diagnosis. However, it has several drawbacks: it is costly, time-consuming, and requires more elaborate precautions. Fine-needle aspiration cytology (FNAC) has none of these limitations and is less invasive. Furthermore, the cost of aspiration cytology is only 10% to 30% of that of surgical biopsy.2
 
We performed FNAC to establish the aetiological diagnosis in our study subjects with HIV infection. To detect false-negative results, biopsy was done in cases diagnosed as reactive lymphadenitis. The aims of the study were to assess the accuracy of FNAC and to correlate the findings with clinical and laboratory parameters like CD4 counts.
 
Methods
The study was conducted in the Departments of Medicine and Pathology, PGIMER and Dr RML Hospital, New Delhi, India, from January 2009 to December 2009. The study protocol and proforma were approved by the ethics committee of the institute. Informed written consent was obtained from all patients. Fine-needle aspiration cytology was performed in 50 consecutive HIV-positive patients presenting with lymphadenopathy at the antiretroviral clinic, or the out-patient or in-patient services. Detailed history was taken and examination of the patients was performed. Clinical stage (as per the World Health Organization [WHO] classification) and CD4 counts were recorded for all patients. Fine-needle aspiration cytology was performed by the clinician on the largest non-inguinal lymph node using standard precautions. The area was cleaned and draped. A 10-mL syringe and 23-gauge needles were used. If the sample was insufficient, another sample was taken from a different lymph node. Slides for Papanicolaou and Periodic-acid Schiff (PAS) stains were fixed with 95% ethanol immediately after preparing the smear; others were air dried. A total of six slides were prepared from each aspirate and were immediately processed by staining with Giemsa stain, Papanicolaou’s stain, Ziehl-Neelsen (ZN) stain for acid-fast bacilli (AFB), PAS stain for fungi, and Gram stain. Cases that were AFB-positive on ZN staining were diagnosed as tubercular lymphadenitis; otherwise, they were retained as suspected cases. Based on the presence or absence of granulomas, caseation (necrosis) and neutrophilic infiltration, tubercular lymph nodes were classified into four cytomorphological categories: granulomatous lymphadenitis (GL), necrotising granulomatous lymphadenitis (NGL), necrotising lymphadenitis (NL), and necrotising suppurative lymphadenitis (NSL). Lymph node biopsies were performed in cases which showed a reactive pattern or suspected tubercular lymphadenitis on FNAC. 
Sensitivity, specificity, and positive and negative predictive values were calculated for FNAC as a diagnostic modality, with biopsy as the reference standard. Statistical analysis for associations between FNAC findings and various parameters was done using univariate and multivariate logistic regression analyses. Data analysis was performed using the Statistical Package for the Social Sciences (Windows version 19.0; SPSS Inc, Chicago [IL], US). A P value of less than 0.05 was regarded as statistically significant.
 
Results
A total of 50 patients (43 men and 7 women) were included in the study. The mean age of the patients was 32.4 years. The cervical region was the most common site of lymphadenopathy (n=39; 78%), followed by the axillary and inguinal regions. The lymph nodes were matted in 62% (n=31) of cases and generalised in 48% (n=24). Generalised lymphadenopathy was present in 54% (n=20) of cases with tubercular lymphadenitis and 40% (n=4) of cases with reactive lymphadenitis. The nature of the aspirate was bloody in 21 (42%) cases, caseous in 24 (48%), and mixed (blood and caseation) in the remainder. The CD4 count ranged from 12 cells/µL to 353 cells/µL, with a mean of 131 cells/µL. Most of the patients were in WHO clinical stage 3 (n=31; 62%).
 
The most common cytological diagnosis was tubercular lymphadenitis (n=37; 74%) followed by a reactive pattern (n=10; 20%). Only one FNAC was diagnosed as fungal lymphadenitis, showing PAS-positive spores of Histoplasma capsulatum (Fig 1). In two cases, the cytological findings were suggestive of thyroid tissue and lipoma, respectively; these were treated as failed FNACs. Tubercular lymphadenitis was further categorised into four cytomorphological patterns, as shown in Table 1. All tubercular cases were AFB-positive except one, an AFB-negative GL on FNAC that was subsequently shown to be AFB-positive tubercular lymphadenitis on biopsy. Of the 10 cases reported as reactive lymph nodes on FNAC, eight gave consent for biopsy. Biopsy showed AFB-positive fibrocaseous tubercular lymphadenopathy in six of these eight cases; in the remaining two, biopsy findings matched the FNAC findings.
 

Figure 1. Histoplasma lymphadenitis: Periodic acid–Schiff–positive oval yeast cells with thick capsule (white arrow), both extracellular and intracellular (black arrowheads) in location (× 100)
 

Table 1. Cytomorphological patterns of tubercular lymphadenitis
 
All cases diagnosed as having mycobacterial disease on FNAC and those who underwent biopsy were included in the analysis. Hence 45 cases were analysed: 36 cases diagnosed as mycobacterial (tubercular) lymphadenitis on FNAC, one case of GL which was AFB-positive fibrocaseous tubercular lymph node on biopsy, and eight cases of reactive lymphadenopathy that underwent biopsy (Table 2). The sensitivity and negative predictive value of FNAC for diagnosing tubercular lymphadenitis were 83.7% and 22.2%, respectively. As AFB positivity was the requisite criterion for diagnosing tubercular lymphadenitis, it was expected that there would be no diagnosis of tuberculosis (TB) in any case which was AFB-negative on FNAC; thus, the specificity and positive predictive value were 100%.
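The 2 × 2 arithmetic behind these figures can be sketched as follows, using the counts reported above (36 true positives; 7 false negatives, ie the AFB-negative GL case plus the six reactive cases that proved tubercular on biopsy; 2 true negatives; no false positives). The function name and dictionary interface are illustrative, not the authors':

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics, expressed as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),   # true positives among diseased
        "specificity": 100 * tn / (tn + fp),   # true negatives among non-diseased
        "ppv": 100 * tp / (tp + fp),           # positive predictive value
        "npv": 100 * tn / (tn + fn),           # negative predictive value
    }

# Counts from the 45 analysed cases (Table 2)
m = diagnostic_metrics(tp=36, fp=0, fn=7, tn=2)
print({k: round(v, 1) for k, v in m.items()})
# sensitivity 83.7, specificity 100.0, ppv 100.0, npv 22.2
```

With no false positives, specificity and positive predictive value are 100% by construction, exactly as the requisite AFB-positivity criterion implies.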
 

Table 2. Analysis of 45 cases
 
We performed logistic regression analysis with tubercular lymphadenitis as the dependent variable and four parameters as covariates. On univariate analysis, CD4 count (P=0.016), nature of aspirate (P=0.013), and matted nodes on examination (P=0.028) were associated with tubercular aetiology on FNAC; lymph node distribution showed no such association (P=0.401). On multivariate analysis, however, none of these factors was associated with tubercular aetiology on FNAC. Moreover, none of these factors was associated with the severe forms (NL or NSL) of tubercular lymphadenitis on either univariate or multivariate analysis.
 
Discussion
Lymphadenopathy in HIV patients is very common; it can be a presenting feature in about 35% of patients with AIDS.3 Its causes vary with the stage of the disease and include persistent generalised lymphadenopathy, lymphoid malignancies, and opportunistic infection. These conditions are important causes of death in AIDS patients, and all can be diagnosed easily and efficiently by aspiration of the affected lymph nodes. In this study, we aimed to investigate the performance of FNAC for the accurate diagnosis of these lymphadenopathies, and compared our results with those from similar Indian and western studies (Table 3).1 4 5 6 7 8 9 10 11 12 13 14 15 16
 

Table 3. Comparison of our FNAC results with those from previous studies1 4 5 6 7 8 9 10 11 12 13 14 15 16
 
Tuberculosis is the most frequent opportunistic infection in HIV patients.17 18 Lymph nodes are the commonest site of extra-pulmonary TB in patients with AIDS.19 20 Using FNAC as the diagnostic modality, we also found tubercular lymphadenitis to be the most common cause of lymphadenopathy, present in 74% of our patients. A similar conclusion was drawn in other Indian studies; however, they reported a prevalence of 34.2% to 60% (Table 3). The high prevalence of tubercular lymphadenitis in our series may be related to the low immunity of the majority of patients; 41 out of 50 patients had CD4 counts of <200 cells/µL.
 
There are two specific pathological criteria for diagnosing tubercular lymphadenitis—caseation and granuloma formation. Both are less likely to be present in tubercular lymphadenitis associated with advanced HIV disease. This is because T-cell function, which is suppressed in advanced HIV disease, is required for granuloma formation. On the basis of these two findings, tubercular lymphadenitis is classified into three categories21 22: GL, NGL, and NL.
 
The GL pattern can occur due to several causes. However, in a country like India, where TB is very common, this pattern is considered to be due to TB until proven otherwise. We found this pattern in 5.4% (2 out of 37 cases with tubercular lymphadenitis) of TB cases, a finding similar to that in previous studies, where it ranged from 4.3% to 28.5% (Table 4).6 9 11 12 16 23 The NGL pattern, with both caseation and epithelioid granulomas, is the most typical pattern of tubercular lymphadenitis. It was present in 18.9% (7 out of 37 cases of tubercular lymphadenitis) of our cases; other studies have reported it in the range of 26.6% to 74% (Table 4).9 11 12 16 23 Necrotising lymphadenitis represents the most severe cytomorphological pattern of tubercular lymphadenitis: there is complete necrosis with only ‘acellular’ debris. It is not labelled as ‘purulent’ because there are no degenerated polymorphonuclear cells. Complete necrosis reflects impaired cell-mediated immunity in this group of patients. Cases of NGL can be wrongly labelled as NL if material is aspirated from the part of the node that contains only caseation. This was the second most common pattern in our study (32.4%; 12 out of 37 cases of tubercular lymphadenitis). In other studies, the reported prevalence rates range from 8.6% to 57.1% (Table 4).6 9 11 12 16 23
 

Table 4. Cytomorphological patterns in tubercular lymphadenitis: comparison with previous studies6 9 11 12 16 23
 
The NSL pattern of tubercular lymphadenitis (Fig 2) was the most common cytomorphological picture, seen in 43.2% (16 out of 37 cases of tubercular lymphadenitis) of patients. This pattern was reported in 20% and 13% of cases of tubercular lymphadenitis by Nayak et al11 and Shenoy et al,12 respectively (Table 4). Jayaram and Chew6 reported this pattern in 67% of their TB cases in Kuala Lumpur, Malaysia. Unlike the other three patterns, however, the NSL pattern, although reported in these case series, is not yet a well-recognised cytomorphological type of tubercular lymphadenitis. This pattern is nevertheless important, especially in HIV patients: if ZN staining is not done, the thin caseation commonly present in these cases can be mistaken for pus, and the case wrongly labelled as pyogenic lymphadenitis. Similar observations have been made in some studies.6 11 12 24
 

Figure 2. Liquefied necrotic material (black arrowhead) with infiltration of polymorphs (white arrow), giving the impression of suppurative lymphadenitis. No epithelioid cells or giant cells are seen (Giemsa staining, × 40)
 
Studies have shown that FNAC is more sensitive for the diagnosis of TB in HIV-positive patients than in seronegative patients.25 In our series, the AFB positivity rate in TB cases was 97.2% (36/37), higher than that in previous studies (43.4% to 95.2%).6 9 11 12 16 This could be due to the fact that the disease was quite advanced in our group of tubercular lymphadenitis patients (mean CD4 count, 108 cells/µL). Moreover, as the cytomorphological pattern deteriorated and necrosis appeared, AFB positivity increased from 50% to 100%, in agreement with data from earlier studies.6 9 11 12 16 Chances of detecting AFB were least in lymph nodes showing the GL pattern: four out of five studies6 9 11 12 16 did not report any AFB-positive case with this pattern of lymphadenopathy. Further, although TB is very common in HIV subjects, Mycobacterium avium complex (MAC) is not frequently seen in India. Its likelihood decreases further when azithromycin prophylaxis against MAC is added to the patient’s treatment regimen.
 
Reactive lymphadenitis was observed in only 20% of cases in our study. Most western studies reported it as the most common lymph node pathology, observed in 25.6% to 59.6% of cases (Table 3). One case of histoplasmosis was detected in our study (Fig 1). On Giemsa staining, the lymph node showed reactive lymphoid cells, histiocytes, and areas of granuloma formation, along with sheets of Histoplasma capsulatum organisms, both extracellular and intracellular in location, which were positive on PAS staining. Hence, although the finding of GL without AFB in HIV-infected patients in India is taken as TB unless proven otherwise, causes like fungal infection should be excluded (by PAS staining), especially if the CD4 count is low.
 
We performed a lymph node biopsy in 18% of our cases (Table 5). Among these, the findings differed from those on FNAC in 77.8% of the cases. The false-negative rate in our study was 16.3% (7 out of 43 cases of TB; Table 2); in other studies it ranges from 2% to 9.2%.1 4 7 8 Of these seven false-negative cases, six were diagnosed as reactive nodes on FNAC but later showed fibrocaseous nodes on biopsy. A possible reason for this discordance is focal tubercular involvement of the nodes: on FNAC, the tubercular area could have been missed and the case wrongly labelled as reactive. Also, different nodes in the same area can enlarge due to different pathologies. Hence, a report of a reactive pattern in advanced disease, as in our group of patients (mean CD4 count in the reactive group, 196 cells/µL), has limited value and should not mark the end of assessment. It should be considered an inconclusive result rather than a negative one. This emphasises the importance of performing a biopsy in this group of patients.
 

Table 5. Comparison of lymph node biopsy results with those from previous studies1 4 7 8
 
A falling CD4 count in our group of patients was associated with increasing risk of tubercular lymphadenitis. However, CD4 counts did not predict the severity of cytomorphological forms of tubercular lymphadenitis. Of note, 41 out of 50 patients had CD4 counts of <200 cells/µL. Hence, we did not have a group of patients with higher CD4 counts in whom less severe forms of tubercular lymphadenitis were more common. This association should be studied further by recruiting patients with a wide range of CD4 counts.
 
In univariate analysis, a matted lymph node on examination (P=0.028) and caseous material on aspiration (P=0.013) were found more often in TB cases than in reactive cases. Hence, apart from routine cytology stains, these observations can guide the ordering of special stains such as ZN staining. However, their association with the cytomorphological pattern of TB was not significant on univariate analysis, as every pattern except GL can show caseation on aspiration and matted nodes on examination, even though these patterns are of increasing cytomorphological severity.
 
Conclusion
Tuberculosis is the most common aetiology of HIV-associated lymphadenopathy in India. Acid-fast bacilli positivity is very high in HIV-associated tubercular lymphadenitis. We recommend routine AFB staining for all lymph nodes undergoing FNAC in HIV patients. Lymph nodes showing AFB-negative GL pattern on FNAC should be stained for fungus, especially if CD4 count is low. If a patient’s CD4 count is low, a reactive FNAC pattern should be taken as an inconclusive result and is an indication for biopsy. All lymph nodes showing NSL pattern on FNAC should undergo ZN staining in HIV-positive patients. It should be recognised as a tubercular cytomorphological pattern, especially in patients with low immunity like those with AIDS. Fine-needle aspiration cytology of lymph nodes is a valuable test for diagnosing tubercular lymphadenitis in HIV-associated lymphadenopathy.
 
References
1. Satyanarayana S, Kalighatgi AT, Murlidhar A, Prasad RS, Jawed KZ, Trehan A. Fine needle aspiration cytology of lymph node in HIV infected patients. Medical Journal Armed Forces India 2002;58:33-7. CrossRef
2. Kaminsky DB. Aspiration biopsy for community hospital. In: Johnston WW, editor. Masson monograph in diagnostic cytopathology. New York: Masson Publication; 1981: 12-3.
3. Guidelines for prevention and management of common opportunistic infection/malignancy among HIV-infected adult and adolescents. NACO. Ministry of Health & Family Welfare. Government of India; May 2007.
4. Martin-Bates E, Tanner A, Suvarna SK, Glazer G, Coleman DV. Use of fine needle aspiration cytology for investigating lymphadenopathy in HIV positive patients. J Clin Pathol 1993;46:564-6. CrossRef
5. Shapiro AL, Pincus RL. Fine-needle aspiration of diffuse cervical lymphadenopathy in patients with acquired immunodeficiency syndrome. Otolaryngol Head Neck Surg 1991;105:419-21.
6. Jayaram G, Chew MT. Fine needle aspiration cytology of lymph nodes in HIV-infected individuals. Acta Cytol 2000;44:960-6. CrossRef
7. Reid AJ, Miller RF, Kocjan GI. Diagnostic utility of fine needle aspiration (FNA) cytology in HIV-infected patients with lymphadenopathy. Cytopathology 1998;9:230-9. CrossRef
8. Bottles K, McPhaul LW, Volberding P. Fine-needle aspiration biopsy of patients with acquired immunodeficiency syndrome (AIDS): experience in an outpatient clinic. Ann Intern Med 1988;108:42-5. CrossRef
9. Llatjos M, Romeu J, Clotet B, et al. A distinctive cytologic pattern for diagnosing tuberculous lymphadenitis in AIDS. J Acquir Immune Defic Syndr 1993;6:1335-8.
10. Lowe SM, Kocjan GI, Edwards SG, Miller RF. Diagnostic yield of fine-needle aspiration cytology in HIV-infected patients with lymphadenopathy in the era of highly active antiretroviral therapy. Int J STD AIDS 2008;19:553-6. CrossRef
11. Nayak S, Mani R, Kavatkar AN, Puranik SC, Holla VV. Fine-needle aspiration cytology in lymphadenopathy of HIV-positive patients. Diagn Cytopathol 2003;29:146-8. CrossRef
12. Shenoy R, Kapadi SN, Pai KP, et al. Fine needle aspiration diagnosis in HIV related lymphadenopathy in Mangalore, India. Acta Cytol 2002;46:35-9. CrossRef
13. Saikia UN, Dey P, Jindal B, Saikia B. Fine needle aspiration cytology in lymphadenopathy of HIV-positive cases. Acta Cytol 2001;45:589-92. CrossRef
14. Gill PS, Arora DR, Arora B, et al. Lymphadenopathy. An important guiding tool for detecting hidden HIV-positive cases: a 6-year study. J Int Assoc Physicians AIDS Care (Chic) 2007;6:269-72. CrossRef
15. Shobhana A, Guha SK, Mitra K, Dasgupta A, Neogi DK, Hazra SC. People living with HIV infection / AIDS—a study on lymph node FNAC and CD4 count. Indian J Med Microbiol 2002;20:99-101.
16. Vanisri HR, Nandini NM, Sunila R. Fine-needle aspiration cytology findings in human immunodeficiency virus lymphadenopathy. Indian J Pathol Microbiol 2008;51:481-4. CrossRef
17. Harries AD. Tuberculosis and human immunodeficiency virus infection in developing countries. Lancet 1990;335:387-90. CrossRef
18. Sharma SK, Kadhiravan T, Banga A, Goyal T, Bhatia I, Saha PK. Spectrum of clinical disease in a series of 135 hospitalised HIV-infected patients from north India. BMC Infect Dis 2004;4:52. CrossRef
19. Shafer RW, Kim DS, Weiss JP, Quale JM. Extrapulmonary tuberculosis in patients with human immunodeficiency virus infection. Medicine (Baltimore) 1991;70:384-97. CrossRef
20. Arora VK, Kumar SV. Pattern of opportunistic pulmonary infections in HIV sero-positive subjects: observations from Pondicherry, India. Indian J Chest Dis Allied Sci 1999;41:135-44.
21. Das DK, Pant JN, Chachra KL, et al. Tuberculous lymphadenitis: correlation of cellular components and necrosis in lymph-node aspirate with A.F.B. positivity and bacillary count. Indian J Pathol Microbiol 1990;33:1-10.
22. Das DK. Fine needle aspiration cytology in diagnosis of tuberculous lesion. Lab Med 2000;31:625-32. CrossRef
23. Rajasekaran S, Gunasekaran M, Jayakumar DD, et al. Tuberculous cervical lymphadenitis in HIV positive and negative patients. Indian J Tuberc 2001;48:201-4.
24. Havlir DV, Barnes PF. Tuberculosis in patients with human immunodeficiency virus infection. N Engl J Med 1999;340:367-73. CrossRef
25. Shriner KA, Mathisen GE, Goetz MB. Comparison of mycobacterial lymphadenitis among persons infected with human immunodeficiency virus and seronegative controls. Clin Infect Dis 1992;15:601-5. CrossRef
 
 

Effectiveness of a new standardised Urinary Continence Physiotherapy Programme for community-dwelling older women in Hong Kong

Hong Kong Med J 2015 Feb;21(1):30–7 | Epub 7 Nov 2014
DOI: 10.12809/hkmj134185
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Effectiveness of a new standardised Urinary Continence Physiotherapy Programme for community-dwelling older women in Hong Kong
BS Leong, MSc, BScPT1,2; Nicola W Mok, PhD1
1 Department of Rehabilitation Sciences, Hong Kong Polytechnic University, Hunghom, Hong Kong
2 Elderly Health Service, Department of Health, Hong Kong
 
Corresponding author: Dr Nicola W Mok (nicola.mok@polyu.edu.hk)
Abstract
Objective: To examine the effectiveness of a standardised Urinary Continence Physiotherapy Programme for older Chinese women with stress, urge, or mixed urinary incontinence.
 
Design: A controlled trial.
 
Setting: Six elderly community health centres in Hong Kong.
 
Participants: A total of 55 women aged over 65 years with mild-to-moderate urinary incontinence.
 
Interventions: Participants were randomly assigned to the intervention group (n=27) where they received eight sessions of Urinary Continence Physiotherapy Programme for 12 weeks. This group received education about urinary incontinence, pelvic floor muscle training with manual palpation and verbal feedback, and behavioural therapy. The control group (n=28) was given advice and an educational pamphlet on urinary incontinence.
 
Results: There was significant improvement in urinary symptoms in the intervention group, especially in the first 5 weeks. Compared with the control group, participants receiving the intervention showed significant reduction in urinary incontinence episodes per week with a mean difference of -6.4 (95% confidence interval, -8.9 to -3.9; t= –5.3; P<0.001) and significant improvement of quality of life with a mean difference of -3.93 (95% confidence interval, -5.08 to -2.78; t= –6.9; P<0.001) measured by Incontinence Impact Questionnaire Short Form modified Chinese (Taiwan) version. The subjective perception of improvement, measured by an 11-point visual analogue scale, was markedly better in the intervention group (mean, 8.7; standard deviation, 1.0; 95% confidence interval, 8.4-9.1) than in the control group (mean, 1.4; standard deviation, 0.7; 95% confidence interval, 1.2-1.7; t=33.9; P<0.001). The mean treatment satisfaction in the intervention group was 9.5 (standard deviation, 0.8) as measured by an 11-point visual analogue scale.
 
Conclusions: This study demonstrated that the Urinary Continence Physiotherapy Programme was effective in alleviating urinary symptoms among older Chinese women with mild-to-moderate heterogeneous urinary incontinence.
 
 
New knowledge added by this study
  •  This standardised Urinary Continence Physiotherapy Programme is effective in improving various types of urinary incontinence of mild-to-moderate severity.
  •  The superior exercise compliance and treatment outcome in this study are likely attributed to the palpation and verbal feedback provided by physiotherapists during pelvic floor muscle training.
Implications for clinical practice or policy
  •  A standardised urinary continence programme consisting of education, supervised pelvic floor muscle training with palpation, and behavioural therapy is an effective first-line management for various types of urinary incontinence in a community setting.
 
 
Introduction
Urinary incontinence (UI), defined as “the complaint of any involuntary leakage of urine”,1 is a major clinical problem, and a significant cause of disability and dependency in the aged population. It is a condition with heterogeneous pathology and commonly classified as stress urinary incontinence (SUI), urge urinary incontinence (UUI), and mixed urinary incontinence (MUI) depending on the symptom behaviour. While the prevalence of UI in older women, globally, is estimated to range from 15% to 30%,2 the reported prevalence rate of UI in Hong Kong ranges from 20% to 52%.3 It has been acknowledged that UI is associated with profound adverse impact on the quality of life (QoL) of the sufferers.3 4 The impact of UI is so substantial that community-dwelling elderly with UI reported inferior physical and mental health, worse self-perceived health status, greater disability, and more depressive symptoms.5 In addition, the extent of the impact was shown to be associated with the severity of UI. Therefore, it is important to investigate a safe and effective treatment strategy in this population, especially in a community setting.
 
Conservative management has been recommended as the first-line management for UI. A variety of conservative strategies that require the patient’s active participation have shown promising results in patients with UI. These include pelvic floor muscle training (PFMT),6 7 vaginal cones,8 bladder training (BT),9 and even a combination of PFMT and general lumbopelvic mobilisation exercises.10 A recent Cochrane review7 suggested that PFMT, the ‘knack’ manoeuvre (a voluntary counterbracing type of contraction during physical stress), and BT are effective strategies in the management of UI in general. In particular, a combination of PFMT and BT was shown to have superior outcomes to BT alone for the management of UUI and MUI.9 To date, however, there is insufficient evidence on the best approach to PFMT.11 In addition, the applicability and effectiveness of PFMT and BT for treating UI have not been properly evaluated in the elderly Chinese population, especially in randomised controlled studies. The aim of this study was to evaluate the effectiveness of a Urinary Continence Physiotherapy Programme (UCPP), a comprehensive programme with education and exercise (PFMT and BT) components, for managing SUI, UUI, and MUI in older Chinese women in a community setting.
 
Methods
A total of 60 subjects were recruited for screening by convenience sampling from six Elderly Health Centres (EHCs), Department of Health, Hong Kong. Inclusion criteria were Chinese females aged 65 years or older with a clinical diagnosis of SUI, UUI, or MUI (with reference to the definition from the International Continence Society1) of mild-to-moderate severity (based on the scoring system by Lagro-Janssen et al12), made by the EHC medical officers in charge. Exclusion criteria were active urinary tract infection; use of diuretic medication; bladder pathology or dysfunction due to genitourinary fistula, tumour, pelvic irradiation, or neurological or other chronic conditions (eg diabetes mellitus, Parkinson’s disease); previous anti-incontinence surgery; significant cognitive impairment on the Cantonese version of the Mini-Mental State Examination (CMMSE13), with cutoffs of ≤18 for illiterate subjects, ≤20 for those with 1 to 2 years of schooling, and ≤22 for those with more than 2 years of schooling, out of a maximum score of 30; obesity (body mass index [BMI] >30 kg/m2); and use of concomitant treatments during the trial.
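The education-stratified CMMSE screening rule above can be expressed as a small helper (an illustrative sketch only; the function name and boolean interface are mine, not part of the CMMSE instrument):

```python
def cmmse_impaired(score: int, years_schooling: int) -> bool:
    """Apply the education-stratified CMMSE cutoffs used for screening
    (maximum score 30): <=18 if illiterate, <=20 for 1-2 years of
    schooling, <=22 for more than 2 years of schooling."""
    if years_schooling == 0:
        cutoff = 18
    elif years_schooling <= 2:
        cutoff = 20
    else:
        cutoff = 22
    return score <= cutoff
```

For example, a score of 22 indicates impairment in a subject with more than 2 years of schooling, but a score of 23 does not.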
 
Randomisation was performed prior to the study by an off-site investigator using a computerised randomisation programme, with allocation concealment by sequentially numbered, opaque, sealed envelopes. After consent was obtained, each participant’s group allocation was revealed to the principal investigator by phone. Overall, 55 eligible participants were assigned to the intervention (n=27) or control (n=28) group. The trial period lasted 12 weeks. The study was approved by the Institutional Medical Research Ethics Committee and was conducted in accordance with the Declaration of Helsinki.14
 
Intervention protocol
One physiotherapist was responsible for delivering assessment and treatment to all subjects during the trial period of 12 weeks, and she was not blinded to the intervention.
 
The intervention group received a 30-minute individual training session at a pre-decided time of day, once weekly for the first 4 weeks and then once every 2 weeks for the remaining 8 weeks, giving a total of eight treatment sessions per participant. There were three major components in the UCPP: education (anatomy of the pelvic floor muscle [PFM] and urinary tract, the urinary continence mechanism, and bladder care), PFMT with the aid of vaginal palpation, and BT. Pelvic floor muscle training included the Kegel exercise programme and neuromuscular re-education (the ‘knack’).15 Bladder training involved strategies to increase the time interval between voids by a combination of progressive void schedules, urge suppression, distraction, self-monitoring, and reinforcement.
 
Four stages of Kegel’s PFMT programme were adopted in this study, including (1) muscle awareness, (2) strengthening, (3) endurance, and (4) habit building and muscle utilisation.16 The exercise regimen was designed to progressively strengthen both type I and type II muscle fibres of the pelvic floor. In the first 2 weeks (muscle awareness phase), one set of 10 (week 1) and 15 (week 2) slow submaximal contractions for 5 seconds each and five fast maximal contractions with a 10-second relaxation between contractions was performed in lying down position. In the strengthening phase (weeks 3 to 4), the muscle-strengthening element was reinforced by gradually increasing the number of submaximal contractions to 25 per session with an increment of five repetitions per week in gravity-dependent position including sitting and standing. The number of fast maximal contractions (5) remained the same as in the awareness phase. In the endurance phase (weeks 5 to 8), the training was more focused on improving the performance of slow and sustained contractions of PFMs by increasing the contraction time to 10 seconds with submaximal contraction while keeping the exercise position and number of both submaximal and maximal contractions as in week 4. In the habit-building and muscle utilisation phase (weeks 9 to 12), the learnt neuromuscular re-education technique (the ‘knack’) and urge suppression strategies were reinforced. In this period, one set of 30 slow submaximal contractions for 10 seconds each and 10 fast maximal contractions with a 10-second relaxation between contractions was practised. Participants were asked to perform three sets of the above-mentioned exercise at specific periods as part of the home programme.
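The four-phase progression described above can be summarised as data; the sketch below encodes the weekly schedule (field names and the lookup helper are mine, not the authors'):

```python
# Four-phase Kegel PFMT progression: (weeks, phase name,
# slow submaximal reps per set, hold seconds, fast maximal reps per set)
PFMT_PHASES = [
    (range(1, 3),  "muscle awareness",                      "10-15", 5,  5),
    (range(3, 5),  "strengthening",                         "20-25", 5,  5),
    (range(5, 9),  "endurance",                             "25",    10, 5),
    (range(9, 13), "habit building and muscle utilisation", "30",    10, 10),
]

def phase_for(week):
    """Return the phase name for a given week of the 12-week programme."""
    for weeks, name, slow, hold, fast in PFMT_PHASES:
        if week in weeks:
            return name
    raise ValueError("week outside the 12-week programme")
```

Participants performed three sets of the prescribed exercise per day at home, so, for instance, week 6 falls in the endurance phase with 10-second submaximal holds.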
 
The control group was given advice and received an educational pamphlet with information about management of UI at baseline. Participants were given an appointment for a follow-up visit in 12 weeks.
 
Outcome measures
A number of indicators were employed to assess different aspects of outcome. First, the number of UI episodes in the previous 7 days (UI7) was examined using a weekly bladder diary log sheet; this was the primary outcome measure of the study. Information for UI7 was collected at baseline and then weekly until the end of the programme (12 weeks). Second, a validated condition-specific QoL assessment tool, the Incontinence Impact Questionnaire Short Form (IIQ-7) Chinese (Taiwan) version,17 was used to study the impact of UI on QoL and its change with the intervention, after minor modifications were made to align with the local culture. Content validation was performed by a panel of doctors who reviewed the instrument and determined whether the questions satisfied the content domain. Seven items were included in the questionnaire to examine whether the subjects suffered from urine leakage in those specific situations. The questionnaire was administered by the same physiotherapist, and the subjects were asked to choose the most appropriate response to each of the 7 items on a 4-point ordinal scale, with 0 meaning “not at all affected”, 1 “slightly affected”, 2 “moderately affected”, and 3 “greatly affected”. The maximum score of 21 indicated the greatest impact of UI on QoL. Third, subjective perception of improvement was assessed at the end of the intervention period with a 10-cm visual analogue scale (VAS) rated from 0 to 10, with 0 suggesting “no improvement” and 10 “complete relief”. Fourth, another VAS (0 to 10) was used to assess subjects’ satisfaction with treatment, with 0 being “totally dissatisfied” and 10 “totally satisfied”. The IIQ-7 was collected at baseline and at the end of the programme. Compliance with treatment in the intervention group was evaluated from two perspectives: attendance, reviewed by calculating the proportion of sessions attended by an individual; and compliance with home exercises, reviewed by calculating the reported frequency of exercises being executed. Any drawbacks and adverse effects during the intervention period were also monitored.
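The IIQ-7 scoring described above (seven items, each rated 0 to 3, maximum 21) reduces to a simple sum; a minimal sketch, with the function name being mine:

```python
def iiq7_score(responses):
    """Total IIQ-7 score: sum of 7 items, each rated on a 4-point
    ordinal scale (0 = not at all affected ... 3 = greatly affected).
    Maximum of 21 indicates the greatest impact of UI on QoL."""
    if len(responses) != 7 or not all(r in (0, 1, 2, 3) for r in responses):
        raise ValueError("expected 7 responses, each scored 0-3")
    return sum(responses)
```

For example, a subject rating every item “greatly affected” scores the maximum of 21.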
 
Statistical analysis
The sample size calculation, based on the power estimation and results of a similar study,18 with a power of 0.8, α=0.05, and an assumed attrition rate of 30%, indicated that a recruitment sample of 60 participants (30 per group) was required. Statistical analysis was performed with the Statistical Package for the Social Sciences (Windows version 13.0; SPSS Inc, Chicago [IL], US). Missing data were treated with the “last observation carried forward” approach. Independent t tests (for parametric data) or Mann-Whitney tests (for non-parametric or non-normally distributed data) and Chi-squared tests (for nominal/ordinal data) were used to compare demographic data and outcome variables between the intervention and control groups at baseline. Pairwise comparisons, with P values adjusted using the Bonferroni correction, were used to examine differences in UI7 between week 1 and subsequent time points during the intervention period (eg between weeks 1 and 2, weeks 1 and 3, etc). Results are presented as mean (standard deviation [SD]).
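The Bonferroni handling can be sketched as follows: with m comparisons (11 in this study, week 1 versus each of weeks 2 to 12), a raw P value is declared significant only if it falls below α/m, or equivalently the adjusted P value is min(1, p × m). The helper names are mine:

```python
def bonferroni_adjust(p, m=11):
    """Bonferroni-adjusted P value for m comparisons."""
    return min(1.0, p * m)

def significant(p, alpha=0.05, m=11):
    """True if a raw P value survives Bonferroni correction."""
    return p < alpha / m
```

With 11 comparisons the effective per-test threshold is 0.05/11 ≈ 0.0045, so a raw P of 0.01 would not be significant here.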
 
Results
The demographics and baseline measurements for the participants are shown in Table 1. Of the 60 participants recruited for screening, three did not turn up for assessment, one declined to participate in the study, and one was excluded due to impaired mental status (Fig 1). The majority of the participants were in their 70s with a mean (± SD) age of 74.3 ± 4.6 years. There was no significant difference between the groups in terms of age, BMI, parity, education level, mental status (CMMSE), and the characteristics of UI at baseline.
 

Table 1. Characteristics of subjects at baseline
 

Figure 1. CONSORT flowchart indicating flow of subjects in the study
 
Number of urinary incontinence episodes in 7 days
There was a significant interaction between time and group in UI7 (F(1,53)=33.14; P<0.001). A significant reduction in UI7 was noted only in the intervention group. There was a significant difference between the control and intervention groups (t=-5.3; P<0.001) at 12 weeks, with a mean difference of -6.4 (95% confidence interval [CI], -8.9 to -3.9). The mean UI7 for the intervention and control groups was 1.0 ± 1.9 (95% CI, 0.3-1.7) and 7.4 ± 6.2 (95% CI, 5.0-9.8), respectively, at 12 weeks (Table 2). When comparing the percentage reduction ([pre-treatment frequency − post-treatment frequency] / [pre-treatment frequency] × 100%) in UI7, the intervention group demonstrated a mean reduction of more than 90% versus 7.2% in the control group. A similar trend of improvement was shown in subjects with SUI, UUI, and MUI; however, statistical analysis was not performed owing to insufficient power (Fig 2). Post-hoc pairwise comparisons (11 in total) suggested a significant improvement from week 1 to week 5 and onwards (P<0.001) in the intervention group.
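The percentage-reduction formula used above is simply:

```python
def pct_reduction(pre, post):
    """([pre-treatment frequency - post-treatment frequency]
    / pre-treatment frequency) x 100%."""
    return (pre - post) / pre * 100.0
```

As a hypothetical illustration (not the study’s actual baseline figure), a fall from 10 to 1 UI episodes per week is a 90% reduction.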
 

Table 2. Results of outcome measures in the two groups before and after treatment
 

Figure 2. Progress of urinary incontinence episodes in the previous 7 days in subjects in the intervention group with various types of incontinence
 
Incontinence Impact Questionnaire Short Form modified Chinese (Taiwan) version
A significant interaction between time and groups was noted (F(1,53) = 54.56; P<0.001). As IIQ-7 was non-normally distributed, Mann-Whitney test was used and revealed a significant reduction of IIQ-7 (ie improvement in QoL) only in the intervention group (P<0.001), with a mean difference of -3.9 (95% CI, -5.1 to -2.8) at 12 weeks (Table 2).
 
Perception of improvement, treatment satisfaction, attendance, exercise compliance, and attrition rate
The results of the subjective perception of improvement and level of satisfaction with treatment after 12 weeks are shown in Table 2. The majority of the participants in the intervention group were satisfied with the interventions and perceived a subjective improvement. The mean attendance and exercise compliance rates in the intervention group were 97.7% ± 5.0% and 99.4% ± 1.9%, respectively. The attrition rate was zero in both groups during the whole study period. No adverse effect or discomfort was reported during the intervention period.
 
Discussion
In line with results from previous studies, this study further confirms that PFMT is an effective and safe treatment for women suffering from various types of UI. However, we observed a mean reduction in UI episodes of >90%, which is noticeably higher than that reported in previous similar studies (32% to 73% reduction).18 19 Although recommendation of PFMT for women with UI is strongly supported by previous research findings,7 the optimal design of a PFMT programme remains unclear.11 One possible explanation for the superior outcome in this study might be the combination of several effective features in this programme: a duration of 12 weeks with gradual exercise progression, combined PFMT and BT, manual vaginal palpation, and one-on-one supervised training sessions. First, the design of this UCPP incorporated some important concepts of exercise therapy. The training period of this study was 12 weeks, which could optimise the effects of neural adaptation (recruitment of efficient motor units and frequency of excitation) and muscle hypertrophy according to the recommendations of the American College of Sports Medicine.20 In addition, a recent systematic review7 also revealed that a PFMT programme lasting at least 3 months (ie around 12 weeks) is more likely to result in a greater treatment effect than one lasting <12 weeks. This programme also adopted the concepts raised by Kegel16 in which the exercise regimen progresses through stages, namely muscle awareness, strengthening, endurance, habit building, and muscle utilisation. Previous studies suggested a negative correlation between PFM strength and UI symptoms.21 22 The improvement in UI7 at the end of the UCPP might be a direct result of the muscle training programme, although PFM strength was not measured in this study.
Theoretically, skeletal muscle strengthening should be facilitated by additional resistance20 and it is therefore questionable whether muscle strength could be increased by a UCPP using only maximal voluntary contractions. However, a previous study indicated muscle strength improvement with daily practice of voluntary PFM contraction without resistance,20 and another found no extra improvement in UI patients trained with intravaginal resistance compared with those trained without.23
 
Second, a combination of PFMT and BT was used in this UCPP. Although PFMT has been recommended as first-line management, even for women with UUI and MUI,7 there is some evidence suggesting superior outcome with combined PFMT and BT in this population.9 The result of this study confirmed these suggestions as a similar pattern of improvement was observed across the three groups (UUI, SUI, and MUI), although statistical analysis of the difference between subgroups was not performed due to the small sample size.
 
Third, vaginal palpation was used to facilitate and ensure correct PFM contraction. It has been reported that approximately 30% of women are unable to perform isolated pelvic floor contractions with only written or verbal instructions.21 We consider the extra proprioceptive cue and specific verbal feedback to be an integral part of PFMT, playing a crucial role especially in the initial (muscle awareness) stage. Such feedback may also increase exercise adherence and compliance, apart from improving the treatment outcomes.
 
Fourth, the exercise sessions were conducted on a one-on-one basis for 30 minutes each. It has been reported that the amount of contact with health care professionals is positively correlated with reported cure and improvement (eg perception of change and incontinence-specific QoL) in patients with UI.11 It has been argued that women receiving more attention may overestimate their improvement to please the treatment provider (ie experimenter effect),11 and it is therefore strongly suggested that more ‘objective’ data such as leakage episode outcomes be included in all PFMT trials. The results of this study revealed improvement in UI7 as well as in other subjective measures (IIQ-7, perception of improvement, and treatment satisfaction), which could be considered additional supporting evidence.11
 
The subjects’ compliance with the treatment programme was excellent, as reflected by the high compliance rate with exercise regimens, the high attendance rate, and zero dropouts; the dropout rates reported in previous studies were relatively high, ranging from 12% to 41%.6 18 24 A possible explanation for such good compliance might be the significant improvement in the early stage of the protocol, which in turn increased the participants’ motivation for, and confidence in, adhering to the PFMT programme. Regular meetings (weekly or bi-weekly) with the same physiotherapist, who offered continuity of care, could be another possible explanation for the favourable compliance.
 
Study limitations
The main limitations of this study were: (1) potential selection bias due to the use of convenience sampling, (2) absence of assessor blinding, (3) possibility of over-reporting, and (4) the use of the modified IIQ-7 Chinese (Taiwan) version. It is acknowledged that convenience sampling might not be representative of the whole population suffering from UI. On the other hand, our subject group might have care-seeking behaviour similar to that of the client group in clinical practice. Although statistically insignificant, the data showed small differences in some aspects of the demographic characteristics. In general, the control group tended to be slightly older (75.4 years vs 73.0 years), more likely to be illiterate, and to have milder severity and shorter duration of UI than the intervention group. As these slight demographic differences might introduce confounding, their effects warrant further investigation, and the results of this study deserve some caution in interpretation. In this study, five participants recruited for screening did not join the programme for various reasons (one declined, three failed to turn up, and one had impaired mental status). Although the specific reason for the absence of three participants was not investigated, the possibility of self-selection bias should be considered. In addition, none of the involved parties (the assessor, treatment provider, and participants) were blinded to the intervention, as opposed to the ideal experimental setup. However, it is widely acknowledged that, given the nature of the treatment programme, it is difficult and often impossible to blind the treatment provider and participants during treatment.11 Due to resource limitations, it was not possible to include an independent, blinded assessor for outcome assessment.
We were well aware of the possible ‘experimenter effect’, and therefore used UI7, considered a more objective measure, as our primary outcome to minimise the possible effect of over-reporting of subjective improvement,11 although its ‘objectivity’ remains controversial. In addition, a significant proportion of participants (approximately 44%) required assistance to complete the outcome questionnaires due to illiteracy, which could possibly lead to over-reporting of improvement. Furthermore, the possibility of participants over-reporting compliance in the self-reported weekly exercise diary should not be overlooked. There is, however, no better measure available to monitor the performance of this type of exercise accurately. A recent randomised controlled trial25 reported that severity of SUI symptoms at baseline and the extent of PFM strength improvement, rather than exercise adherence, correlated with symptom reduction in women with SUI. The result suggests a complex interaction between the subject’s health condition, exercise compliance, and treatment effectiveness, which warrants further investigation. We therefore did not attribute the improvement observed in our intervention group to the high self-reported compliance rate. Finally, a modified Chinese (Taiwan) version of the IIQ-7 was used in this study. We are aware that this version has not been properly validated. However, we do not believe this affects our results, as the modification was minor and the version was highly comparable with the Hong Kong version, which was validated subsequent to the current study.
 
This study examined the immediate effectiveness of the verbally instructed UCPP just after cessation of supervised training. No follow-up data were collected. It is recommended that the long-term effectiveness of UCPP be explored, especially in the light of fairly extensive literature which reported poor long-term adherence and relapse at 3 to 5 years following pelvic floor rehabilitation programme.26
 
Conclusions
This study demonstrated that a structured 12-week programme of PFMT with gradual exercise progression, BT with urgency suppression, and enhanced education is likely to improve episodes of urinary leakage and QoL in older Chinese women with various kinds of UI in a community setting.
 
References
1. Abrams P, Cardozo L, Fall M, et al. The standardisation of terminology of lower urinary tract function: report from the Standardisation Sub-committee of the International Continence Society. Neurourol Urodyn 2002;21:167-78. CrossRef
2. Klausner AP, Vapnek JM. Urinary incontinence in the geriatric population. Mt Sinai J Med 2003;70:54-61.
3. Pang MW, Leung HY, Chan LW, Yip SK. The impact of urinary incontinence on quality of life among women in Hong Kong. Hong Kong Med J 2005;11:158-63.
4. Cheung RY, Chan S, Yiu AK, Lee LL, Chung TK. Quality of life in women with urinary incontinence is impaired and comparable to women with chronic diseases. Hong Kong Med J 2012;18:214-20.
5. Aguilar-Navarro S, Navarrete-Reyes AP, Grados-Chavarria BH, Garcia-Lara JM, Amieva H, Avila-Funes JA. The severity of urinary incontinence decreases health-related quality of life among community-dwelling elderly. J Gerontol A Biol Sci Med Sci 2012;67:1266-71. CrossRef
6. Fan HL, Chan SS, Law TS, Cheung RY, Chung TK. Pelvic floor muscle training improves quality of life of women with urinary incontinence: a prospective study. Aust N Z J Obstet Gynaecol 2013;53:298-304. CrossRef
7. Dumoulin C, Hay-Smith J. Pelvic floor muscle training versus no treatment, or inactive control treatments, for urinary incontinence in women. Cochrane Database Syst Rev 2010;(1):CD005654.
8. Pereira VS, de Melo MV, Correia GN, Driusso P. Long-term effects of pelvic floor muscle training with vaginal cone in post-menopausal women with urinary incontinence: a randomized controlled trial. Neurourol Urodyn 2013;32:48-52. CrossRef
9. Wallace SA, Roe B, Williams K, Palmer M. Bladder training for urinary incontinence in adults. Cochrane Database Syst Rev 2004;(1):CD001308.
10. Kim H, Yoshida H, Suzuki T. The effects of multidimensional exercise treatment on community-dwelling elderly Japanese women with stress, urge, and mixed urinary incontinence: a randomized controlled trial. Int J Nurs Stud 2011;48:1165-72. CrossRef
11. Hay-Smith EJ, Herderschee R, Dumoulin C, Herbison GP. Comparisons of approaches to pelvic floor muscle training for urinary incontinence in women. Cochrane Database Syst Rev 2011;(12):CD009508.
12. Lagro-Janssen TL, Debruyne FM, Smits AJ, van Weel C. Controlled trial of pelvic floor exercises in the treatment of urinary stress incontinence in general practice. Br J Gen Pract 1991;41:445-9.
13. Chiu HF, Lee HC, Chung WS, Kwong PK. Reliability and validity of the Cantonese version of Mini-Mental State Examination: a preliminary study. J Hong Kong Coll Psych 1994;4:25-8.
14. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA 2013;310:2191-4. CrossRef
15. Miller JM, Sampselle C, Ashton-Miller J, Hong GR, DeLancey JO. Clarification and confirmation of the Knack maneuver: the effect of volitional pelvic floor muscle contraction to preempt expected stress incontinence. Int Urogynecol J Pelvic Floor Dysfunct 2008;19:773-82. CrossRef
16. Kegel AH. Physiologic therapy for urinary stress incontinence. J Am Med Assoc 1951;146:915-7. CrossRef
17. Tsai CH. The effectiveness of a pelvic floor muscle rehabilitation program in managing urinary tract incontinence among Taiwanese middle age and older women [dissertation]. Pittsburgh: University of Pittsburgh; 2001.
18. Castro RA, Arruda RM, Zanetti MR, Santos PD, Sartori MG, Girao MJ. Single-blind, randomized, controlled trial of pelvic floor muscle training, electrical stimulation, vaginal cones, and no active treatment in the management of stress urinary incontinence. Clinics (Sao Paulo) 2008;63:465-72. CrossRef
19. Zahariou A, Karamouti M, Georgantzis D, Papaioannou P. Are there any UPP changes in women with stress urinary incontinence after pelvic floor muscle exercises? Urol Int 2008;80:270-4. CrossRef
20. Garber CE, Blissmer B, Deschenes MR, et al. American College of Sports Medicine position stand. Quantity and quality of exercise for developing and maintaining cardiorespiratory, musculoskeletal, and neuromotor fitness in apparently healthy adults: guidance for prescribing exercise. Med Sci Sports Exerc 2011;43:1334-59. CrossRef
21. Bø K. Pelvic floor muscle strength and response to pelvic floor muscle training for stress urinary incontinence. Neurourol Urodyn 2003;22:654-8. CrossRef
22. Dannecker C, Wolf V, Raab R, Hepp H, Anthuber C. EMG-biofeedback assisted pelvic floor muscle training is an effective therapy of stress urinary or mixed incontinence: a 7-year experience with 390 patients. Arch Gynecol Obstet 2005;273:93-7. CrossRef
23. Herbison GP, Dean N. Weighted vaginal cones for urinary incontinence. Cochrane Database Syst Rev 2013;(7):CD002114.
24. Hay-Smith EJ, Bø K, Berghmans LC, Hendriks HJ, de Bie RA, van Waalwijk van Doorn ES. Pelvic floor muscle training for urinary incontinence in women. Cochrane Database Syst Rev 2001;(1):CD001407.
25. Hung HC, Chih SY, Lin HH, Tsauo JY. Exercise adherence to pelvic floor muscle strengthening is not a significant predictor of symptom reduction for women with urinary incontinence. Arch Phys Med Rehabil 2012;93:1795-800. CrossRef
26. Bø K, Hilde G. Does it work in the long term?—A systematic review on pelvic floor muscle training for female stress urinary incontinence. Neurourol Urodyn 2013;32:215-23. CrossRef
 
Comparison between fluorescent in-situ hybridisation and array comparative genomic hybridisation in preimplantation genetic diagnosis in translocation carriers

Hong Kong Med J 2015 Feb;21(1):16–22 | Epub 24 Oct 2014
DOI: 10.12809/hkmj144222
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Comparison between fluorescent in-situ hybridisation and array comparative genomic hybridisation in preimplantation genetic diagnosis in translocation carriers
Vivian CY Lee, FHKAM (Obstetrics and Gynaecology); Judy FC Chow, MPhil; Estella YL Lau, PhD; William SB Yeung, PhD; PC Ho, MD; Ernest HY Ng, MD
Department of Obstetrics and Gynaecology, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong
Corresponding author: Dr Vivian CY Lee (v200lee@hku.hk)
Abstract
Objectives: To compare the pregnancy outcomes of fluorescent in-situ hybridisation and array comparative genomic hybridisation in preimplantation genetic diagnosis for translocation carriers.
 
Design: Historical cohort.
 
Setting: A teaching hospital in Hong Kong.
 
Patients: All preimplantation genetic diagnosis treatment cycles performed for translocation carriers from 2001 to 2013.
 
Results: Overall, 101 treatment cycles for preimplantation genetic diagnosis in translocation carriers were included: 77 cycles for reciprocal translocation and 24 cycles for Robertsonian translocation. Fluorescent in-situ hybridisation and array comparative genomic hybridisation were used in 78 and 11 cycles, respectively. The ongoing pregnancy rate per initiated cycle after array comparative genomic hybridisation was significantly higher than that after fluorescent in-situ hybridisation in all translocation carriers (36.4% vs 9.0%; P=0.010). The miscarriage rate was comparable with both techniques. The testing method (array comparative genomic hybridisation or fluorescent in-situ hybridisation) was the only significant factor affecting the ongoing pregnancy rate after controlling for the women’s age, type of translocation, and clinical information of the preimplantation genetic diagnosis cycles by logistic regression (odds ratio=1.875; P=0.023; 95% confidence interval, 1.090-3.226).
 
Conclusion: This local retrospective study confirmed that array comparative genomic hybridisation is associated with significantly higher pregnancy rates than fluorescent in-situ hybridisation in translocation carriers. Array comparative genomic hybridisation should be the technique of choice in preimplantation genetic diagnosis cycles in translocation carriers.
 
New knowledge added by this study
  •  Fluorescent in-situ hybridisation (FISH) has been widely used in preimplantation genetic diagnosis (PGD) in translocation carriers. However, array comparative genomic hybridisation (aCGH) has largely replaced FISH since its development, owing to its ability to test all 24 chromosomes and its improved pregnancy rates. This is the first study to report the use of aCGH in Hong Kong. Compared with FISH, aCGH was associated with a significantly higher rate of ongoing pregnancy in translocation carriers (both reciprocal and Robertsonian translocations).
Implications for clinical practice or policy
  • Array CGH should be the technique of choice for PGD in translocation carriers.
 
 
Introduction
Since the report of the first live birth after preimplantation genetic diagnosis (PGD) published in 1990,1 more than 21 000 cycles have been performed worldwide in the past two decades, based on data from the ESHRE (European Society of Human Reproduction and Embryology) PGD consortium.2 Fluorescent in-situ hybridisation (FISH) has been used for PGD in translocation carriers. This technique applies chromosome-specific DNA probes to metaphase chromosomes or interphase nuclei. For PGD in translocation carriers, the usual approach is to use commercially available centromeric, locus-specific, and subtelomeric probes, depending on the translocated segments.3
 
However, FISH itself carries the technical difficulties of fixation and spreading of the nucleus, with a reported error rate of 7% to 10%.4 5 6 Another problem is that in translocation carriers there is an interchromosomal effect, so the proportion of embryos with aneuploidies is higher than in those without translocations.7 Segmental loss or gain is also a frequent event in human embryos.8 9 Fluorescent in-situ hybridisation cannot detect these chromosomal abnormalities, which could explain the low success rates of PGD in translocation carriers, as most of these embryos would result in implantation failure or miscarriage.10
 
With the development of comparative genomic hybridisation (CGH), it became possible to detect abnormalities in all 24 chromosomes, and its application to single blastomere biopsy was first reported in 1996.11 Comparative genomic hybridisation is a DNA-based technique, employing comparative hybridisation of differentially labelled DNA samples to normal metaphase chromosomes on a microscope slide.4 The ratio of fluorescence reveals the gain or loss in the tested samples. However, the turnaround time is about 4 days, which does not fit into the strict time frame of PGD treatment, and cryopreservation of embryos is mandatory unless polar body biopsy is used.12 Array CGH (aCGH), employing DNA probes affixed directly to a microscope slide, solves this problem: the turnaround time is about a day, which makes fresh transfer after blastomere or trophectoderm biopsy possible.13 It has been demonstrated that using aCGH in translocation carriers is beneficial.9
 
Our centre has used the FISH technique for translocation carriers since our team developed PGD in 2001, which resulted in the first such live birth in Hong Kong.14 We acquired the aCGH platform in April 2012. This retrospective analysis aimed to compare the pregnancy outcomes of FISH and aCGH in PGD treatment cycles for translocation carriers.
 
Methods
Study population
Data from all treatment cycles performed for PGD in the Department of Obstetrics and Gynaecology, Queen Mary Hospital/The University of Hong Kong from 2001 to June 2013 were retrieved. Only PGD cycles in translocation carriers were included in the present study, which was approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster.
 
Treatment regimen
The details of the long protocol of the ovarian stimulation regimen, gamete handling, cryopreservation of embryos, and frozen embryo transfer have been described previously.15 The details of PGD have also been described previously.16 In short, embryo biopsy was performed on day 3 at the 6- to 8-cell stage. Two blastomeres were tested from 2001 to 2005, and one blastomere was routinely tested from 2006 onwards. The blastomere was fixed for FISH analysis. Commercially available FISH probes were chosen to flank the breakpoint. For aCGH, the blastomere was transferred into a polymerase chain reaction tube and whole genome amplification was performed (SurePlex; BlueGnome, Cambridge, UK). Array CGH was performed using 24sure V3 (BlueGnome) for Robertsonian translocation carriers or 24sure+ (BlueGnome) for reciprocal translocation carriers. All results were interpreted separately by two laboratory staff.
 
Outcome measures
The primary outcome measures of the study were clinical and ongoing pregnancy rates. Clinical pregnancies were defined by the presence of one or more gestation sacs, or by histological confirmation of gestational products in cases of early pregnancy failure. Ongoing pregnancies were those beyond 8 to 10 weeks of gestation, at which stage the patients were referred for antenatal care. The secondary outcome measures were miscarriage rate and cancellation rate. Cancellation rate was defined as the percentage of treatment cycles with no embryo transfer after oocyte retrieval.
 
Statistical analysis
The Kolmogorov-Smirnov test was used to test the normality of continuous variables. Continuous variables were expressed as mean ± standard deviation if normally distributed, and as median (range) if not. Statistical comparisons were carried out with Student’s t test, the Mann-Whitney U test, and/or the Wilcoxon signed rank test for continuous variables, and the Chi squared test or Fisher’s exact test for categorical variables, as appropriate. Statistical analysis was performed using the Statistical Package for the Social Sciences (Windows version 20.0; SPSS Inc, Chicago [IL], US). A two-tailed P value of <0.05 was considered statistically significant. Binary logistic regression using the enter method was used to model the pregnancy rate in PGD cycles.
 
Results
There were 339 PGD cycles, of which 101 were performed in translocation carriers during the study period: 77 cycles for reciprocal translocation and 24 cycles for Robertsonian translocation. The two techniques, FISH and aCGH, were used in 78 and 11 cycles, respectively (Table 1). The overall cancellation rate was 39.6% (40/101). Four cycles were cancelled due to a high risk of ovarian hyperstimulation syndrome; eight due to poor ovarian response or poor embryo quality; and 28 due to no normal embryo being identified by either technique (Table 1). The cancellation rate due to abnormal signals in all embryos was significantly higher with the FISH technique than with aCGH (34.6% vs 9.1%).
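The overall cancellation rate, defined earlier as the percentage of initiated cycles with no embryo transfer after oocyte retrieval, can be reproduced from the cycle counts in the text:

```python
# Cycle counts as reported in the text
cancelled_ohss = 4        # high risk of ovarian hyperstimulation syndrome
cancelled_poor = 8        # poor ovarian response or poor embryo quality
cancelled_no_normal = 28  # no normal embryo after PGD
initiated = 101

cancelled = cancelled_ohss + cancelled_poor + cancelled_no_normal
rate = cancelled / initiated * 100
print(f"{cancelled}/{initiated} = {rate:.1f}%")  # 40/101 = 39.6%
```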
 

Table 1. Information on preimplantation genetic diagnosis cycles
 
The demographic and clinical data of women who underwent PGD with FISH and aCGH are presented in Table 2. Women in the aCGH group were significantly younger than those in the FISH group, and the serum oestradiol concentration on ovulation trigger day in the aCGH group was significantly higher than that in the FISH group. The total dosage of gonadotropin, the number of follicles larger than or equal to 16 mm, and the number of oocytes retrieved were comparable between the two groups. The demographic and clinical data of cycles for couples with reciprocal and Robertsonian translocation were all comparable (data not shown).
 

Table 2. Demographic and clinical data of subjects included in treatment cycles for preimplantation genetic diagnosis using fluorescent in-situ hybridisation and array comparative genomic hybridisation
 
The pregnancy rates per cycle and per transfer were all significantly higher in cycles performed using aCGH. The miscarriage rates were similar between the two groups (Table 3). A subgroup analysis of cycles performed from 2006 to 2013 showed similar results in all the above comparisons with significantly higher clinical and ongoing pregnancy rates per initiated cycle and per transfer in cycles using aCGH than those using FISH, but with comparable miscarriage rates (data not shown). Figures 1 and 2 show PGD results with FISH and aCGH, respectively.
 

Table 3. Pregnancy rates
 

Figure 1. Preimplantation genetic diagnosis by fluorescent in-situ hybridisation
 

Figure 2. Results of array comparative genomic hybridisation (aCGH)
 
Logistic regression revealed that the method of testing (FISH or aCGH) was the only factor that significantly affected the ongoing pregnancy rate; age of the women, the type of translocation, or other clinical information including number of oocytes retrieved, the gonadotropin dosage used, and the oestradiol concentration on the day of human chorionic gonadotropin administration did not affect the outcome. The method of testing remained a significant factor after controlling for the age of women and type of translocation (Table 4).
 

Table 4. Logistic regression of variables associated with ongoing pregnancy rate for PGD in translocation carriers
 
Figure 3 shows the results of aCGH in embryos produced by a reciprocal translocation carrier. Array CGH can detect segmental changes in translocated chromosomes (embryo 18) and in other chromosomes (embryo 3). It can also detect whole chromosome aneuploidy (embryo 16). Embryo 7 was replaced and resulted in an ongoing pregnancy.
 

Figure 3. Array comparative genomic hybridisation (CGH) of embryo biopsy
The mother was a reciprocal translocation carrier, 46,XX,t(3;4)(q29;q32). Array CGH could detect segmental changes unrelated to translocation chromosomes (embryo 3), whole chromosome aneuploidy (embryo 16), and unbalanced reciprocal translocation (embryo 18)
 
Discussion
The present study showed that PGD using aCGH was associated with significantly higher pregnancy rates (both per initiated cycle and per embryo transfer) than FISH. The testing method, ie aCGH or FISH, was the only significant factor affecting the ongoing pregnancy rate in logistic regression.
 
Couples carrying balanced reciprocal or Robertsonian translocations are well known to produce a high percentage of unbalanced gametes and embryos,17 resulting in high miscarriage rates and a variable chance of unbalanced offspring with multiple congenital anomalies and mental retardation.18 The high percentage of unbalanced gametes can be explained by the segregation modes and behaviour of translocations during meiosis.19 Beyond the direct effect of translocations on meiosis, the interchromosomal effect they exert further increases the percentage of aneuploidies in the gametes and embryos of translocation carriers,7 20 21 22 which further decreases the number of normal/balanced euploid embryos suitable and feasible for transfer. It has been reported that only up to 16% of preimplantation embryos in translocation carriers are normal/balanced and euploid.9
 
In the past decade, FISH was commonly employed to detect unbalanced chromosome rearrangements in embryos using probes chosen according to the translocated segments.23 Fluorescent in-situ hybridisation is technically challenging, especially with regard to fixation and spreading.3 5 24 The error rate of FISH has been reported to be up to 10%.5 6 25 As PGD using FISH in translocation carriers only employs fluorescent DNA probes for the translocated segments, aneuploidies and segmental rearrangements unrelated to the translocated segments will be missed.3 Even in aneuploidy screening, only up to five chromosomes can be tested in one round of FISH, so usually at most half of all chromosomes can be tested even with repeated rounds, and repeated rounds are associated with decreased diagnostic accuracy.4 Therefore, FISH would miss a proportion of aneuploidies and abnormal embryos, which may result in misdiagnosis, implantation failure, or miscarriage.10 This is probably the major reason for the unfavourable results in a systematic review on the use of PGD in translocation carriers26 and in the meta-analysis of preimplantation genetic screening.27 The cancellation rate, ie no embryo transfer after oocyte retrieval, was higher after FISH than after aCGH, probably due to these technical difficulties.
 
The development of CGH makes it possible to test for all 24 chromosomes, while the development of aCGH makes it feasible to use the technique in the restricted time frame of PGD. Several groups of investigators have reported success with using aCGH for PGD in translocation carrier couples to improve their reproductive outcomes9 28; we have shown similar results in this local study.
 
Figure 3 shows the result of PGD in a patient with a reciprocal translocation. Array CGH detected unbalanced reciprocal translocated segments in embryo 18. It also picked up other segmental changes (1q and 9q21.11-qter) unrelated to the translocated chromosomes in embryo 3, and detected whole chromosome aneuploidy (monosomy 22) in embryo 16. In FISH, probes flanking the translocation breakpoints are used; therefore, the abnormalities in embryo 3 and embryo 16 could not have been detected. Furthermore, the average resolution of the aCGH platform used for Robertsonian translocation is 10 Mb, while that used for reciprocal translocation is 5 Mb; the higher resolution allows small abnormalities in the embryo to be detected easily. Array CGH offers a more comprehensive approach to PGD in translocation carriers, resulting in a significant increase in the pregnancy rate compared with FISH.
 
In our cohort, women for whom aCGH was employed were younger than those for whom FISH was employed. This probably explains the higher oestradiol concentration after ovarian stimulation for in-vitro fertilisation treatment, along with the non-significantly higher numbers of follicles and oocytes retrieved in the aCGH group. To reveal the effect of the testing method on the pregnancy rate, we controlled for the women’s age, type of translocation, and other stimulation data, including the total dosage of gonadotropin and the number of oocytes retrieved, in multivariate logistic regression; the testing method remained the only significant factor affecting the ongoing pregnancy rate. This indicates that, after controlling for these possible confounding factors, PGD cycles using aCGH were associated with a significantly higher ongoing pregnancy rate than those using FISH.
 
It has been controversial whether PGD can improve the reproductive outcomes compared with natural conception in translocation carriers. A systematic review reported adverse effects on the pregnancy rates after PGD in translocation carriers compared with natural conception.26 However, all the PGD cycles included in this review were performed with FISH. Moreover, the case reports and case series of PGD included had a small number of subjects; in 16 out of 21 studies, the sample size was only one to three cases. Larger systematic reviews on the use of aCGH in translocation carriers are urgently needed.
 
This study is retrospective in nature, and there may be some confounding factors, such as differences in embryo biopsy techniques and culture conditions, which were not controlled for and which might have affected the pregnancy outcomes. As we started using aCGH approximately one and a half years ago, the number of aCGH cases was smaller than that of FISH cases. Despite the small sample size, the ongoing pregnancy rate revealed a significant increase after employing aCGH in translocation carriers. This further strengthens our argument in favour of a PGD programme using aCGH.
 
It is well known that two-blastomere biopsy is more detrimental to pregnancy than one-blastomere biopsy.22 Our team employed two-blastomere biopsy when we first developed our PGD programme, and switched to one-blastomere biopsy in 2006. Therefore, a subgroup analysis was performed on the cycles between 2006 and 2013. The ongoing pregnancy rate per initiated cycle remained significantly higher in the group using aCGH than in that using FISH.
 
Conclusion
Use of aCGH can improve the pregnancy outcomes of PGD in translocation carriers compared with FISH. Array CGH should be the technique of choice for PGD in translocation carriers.
 
References
1. Handyside AH, Kontogianni EH, Hardy K, Winston R. Pregnancies from biopsied human preimplantation embryos sexed by Y-specific DNA amplification. Nature 1990;344:768-70. CrossRef
2. Traeger-Synodinos J, Coonen E, Goossens V, et al. Session 09: ESHRE data reporting on PGD cycles and oocyte donation. Hum Reprod 2013;28(Suppl 1):i18-i19. CrossRef
3. DeUgarte CM, Li M, Surrey M, Danzer H, Hill D, DeCherney AH. Accuracy of FISH analysis in predicting chromosomal status in patients undergoing preimplantation genetic diagnosis. Fertil Steril 2008;90:1049-54. CrossRef
4. Wells D, Alfarawati S, Fragouli E. Use of comprehensive chromosomal screening for embryo assessment: microarrays and CGH. Mol Hum Reprod 2008;14:703-10. CrossRef
5. Velilla E, Escudero T, Munné S. Blastomere fixation techniques and risk of misdiagnosis for preimplantation genetic diagnosis of aneuploidy. Reprod Biomed Online 2002;4:210-7. CrossRef
6. Li M, DeUgarte CM, Surrey M, Danzer H, DeCherney A, Hill DL. Fluorescence in situ hybridization reanalysis of day-6 human blastocysts diagnosed with aneuploidy on day 3. Fertil Steril 2005;84:1395-400. CrossRef
7. Gianaroli L, Magli MC, Ferraretti AP, et al. Possible interchromosomal effect in embryos generated by gametes from translocation carriers. Hum Reprod 2002;17:3201-7. CrossRef
8. Vanneste E, Voet T, Le Caignec C, et al. Chromosome instability is common in human cleavage-stage embryos. Nat Med 2009;15:577-83. CrossRef
9. Fiorentino F, Spizzichino L, Bono S, et al. PGD for reciprocal and Robertsonian translocations using array comparative genomic hybridization. Hum Reprod 2011;26:1925-35. CrossRef
10. Scott RT Jr, Ferry K, Su J, Tao X, Scott K, Treff NR. Comprehensive chromosome screening is highly predictive of the reproductive potential of human embryos: a prospective, blinded, nonselection study. Fertil Steril 2012;97:870-5. CrossRef
11. Wells D, Delhanty J. Evaluating comparative genomic hybridisation (CGH) as a strategy for preimplantation diagnosis of unbalanced chromosome complements. Eur J Hum Genet 1996;4:125.
12. Wells D, Escudero T, Levy B, Hirschhorn K, Delhanty JD, Munné S. First clinical application of comparative genomic hybridization and polar body testing for preimplantation genetic diagnosis of aneuploidy. Fertil Steril 2002;78:543-9. CrossRef
13. Rubio C, Rodrigo L, Mir P, et al. Use of array comparative genomic hybridization (array-CGH) for embryo assessment: clinical results. Fertil Steril 2013;99:1044-8. CrossRef
14. Ng EH, Lau EY, Yeung WS, Lau ET, Tang MH, Ho PC. Preimplantation genetic diagnosis in Hong Kong. Hong Kong Med J 2003;9:43-7.
15. Ng EH, Yeung WS, Lau EY, So WW, Ho PC. High serum oestradiol concentrations in fresh IVF cycles do not impair implantation and pregnancy rates in subsequent frozen-thawed embryo transfer cycles. Hum Reprod 2000;15:250-5. CrossRef
16. Chow JF, Yeung WS, Lau EY, et al. Singleton birth after preimplantation genetic diagnosis for Huntington disease using whole genome amplification. Fertil Steril 2009;92:828.e7-10.
17. Munné S. Analysis of chromosome segregation during preimplantation genetic diagnosis in both male and female translocation heterozygotes. Cytogenet Genome Res 2005;111:305-9. CrossRef
18. Jalbert P, Sele B, Jalbert H. Reciprocal translocations: a way to predict the mode of imbalanced segregation by pachytene-diagram drawing. Hum Genet 1980;55:209-22. CrossRef
19. Scriven PN, Handyside AH, Ogilvie CM. Chromosome translocations: segregation modes and strategies for preimplantation genetic diagnosis. Prenat Diagn 1998;18:1437-49. CrossRef
20. Pellestor F, Imbert I, Andréo B, Lefort G. Study of the occurrence of interchromosomal effect in spermatozoa of chromosomal rearrangement carriers by fluorescence in-situ hybridization and primed in-situ labelling techniques. Hum Reprod 2001;16:1155-64. CrossRef
21. Douet-Guilbert N, Bris MJ, Amice V, et al. Interchromosomal effect in sperm of males with translocations: report of 6 cases and review of the literature. Int J Androl 2005;28:372-9. CrossRef
22. Machev N, Gosset P, Warter S, Treger M, Schillinger M, Viville S. Fluorescence in situ hybridization sperm analysis of six translocation carriers provides evidence of an interchromosomal effect. Fertil Steril 2005;84:365-73. CrossRef
23. Harper JC, Wilton L, Traeger-Synodinos J, et al. The ESHRE PGD Consortium: 10 years of data collection. Hum Reprod Update 2012;18:234-47. CrossRef
24. Munné S. Preimplantation genetic diagnosis of numerical and structural chromosome abnormalities. Reprod Biomed Online 2002;4:183-96. CrossRef
25. Munné S, Sandalinas M, Escudero T, Fung J, Gianaroli L, Cohen J. Outcome of preimplantation genetic diagnosis of translocations. Fertil Steril 2000;73:1209-18. CrossRef
26. Franssen MT, Musters AM, van der Veen F, et al. Reproductive outcome after PGD in couples with recurrent miscarriage carrying a structural chromosome abnormality: a systematic review. Hum Reprod Update 2011;17:467-75. CrossRef
27. Mastenbroek S, Twisk M, van der Veen F, Repping S. Preimplantation genetic screening: a systematic review and meta-analysis of RCTs. Hum Reprod Update 2011;17:454-66. CrossRef
28. Colls P, Escudero T, Fischer J, et al. Validation of array comparative genome hybridization for diagnosis of translocations in preimplantation human embryos. Reprod Biomed Online 2012;24:621-9. CrossRef
 
Improving the emergency department management of post-chemotherapy sepsis in haematological malignancy patients

Hong Kong Med J 2015 Feb;21(1):10–5 | Epub 10 Oct 2014
DOI: 10.12809/hkmj144280
© Hong Kong Academy of Medicine. CC BY-NC-ND 4.0
 
ORIGINAL ARTICLE
Improving the emergency department management of post-chemotherapy sepsis in haematological malignancy patients
HF Ko, MB, BS, FHKAM (Emergency Medicine)1; SS Tsui, APN1; Johnson WK Tse, APN, BSN HD (Nursing)1; WY Kwong, MB, ChB1; OY Chan, MB, BS2; Gordon CK Wong, MB, BS, FHKAM (Emergency Medicine)1
1 Accident and Emergency Department, Queen Elizabeth Hospital, Jordan, Hong Kong
2 Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
 
Corresponding author: Dr HF Ko (frankhko@hotmail.com)
Abstract
Objective: To review the result of the implementation of treatment protocol for post-chemotherapy sepsis in haematological malignancy patients.
 
Design: Case series with internal comparison.
 
Setting: Accident and Emergency Department, Queen Elizabeth Hospital, Hong Kong.
 
Patients: Febrile patients presenting to the Accident and Emergency Department with underlying haematological malignancy and receiving chemotherapy within 1 month of Accident and Emergency Department visit between June 2011 and July 2012. Similar cases between June 2010 and May 2011 served as historical referents.
 
Main outcome measures: The compliance rate among emergency physicians, the door-to-antibiotic time before and after implementation of the protocol, and the impact of the protocol on Accident and Emergency Department and hospital service.
 
Results: A total of 69 patients were enrolled in the study. Of these, 50 were managed with the treatment protocol while 19 patients were historical referents. Acute myeloid leukaemia was the most commonly encountered malignancy. Overall, 88% of the patients presented with sepsis syndrome. The mean door-to-antibiotic time of those managed with the treatment protocol was 47 minutes versus 300 minutes in the referent group. Overall, 86% of patients in the treatment group met the target door-to-antibiotic time of less than 1 hour. The mean lengths of stay in the emergency department (76 minutes vs 105 minutes) and hospital (11 days vs 15 days) were shorter in those managed with the treatment protocol versus the historical referents.
 
Conclusion: Implementation of the protocol can effectively shorten the door-to-antibiotic time to meet the international standard of care for neutropenic sepsis patients. The compliance rate was also high. We showed that effective implementation of the protocol is feasible in a busy emergency department through excellent teamwork between nurses, pharmacists, and emergency physicians.
 
 
New knowledge added by this study
  •  A well-written, easily available treatment protocol together with stocking of antibiotics in the emergency department can effectively shorten the door-to-antibiotic (DTA) time from 300 minutes to 47 minutes.
  •  In this study, 86% of patients met the target DTA time of less than 1 hour.
Implications for clinical practice or policy
  •  Orchestrated efforts between nurses, pharmacists, and physicians are crucial for implementation of the protocol in one of the busiest emergency departments in the region.
 
 
Introduction
Cancer patients receiving chemotherapy sufficient to cause myelosuppression and adverse effects on the integrity of the gastro-intestinal mucosa are at high risk of invasive infections. Patients with profound, prolonged neutropenia are at particularly high risk of serious infections. Prolonged neutropenia is most likely to occur in patients undergoing induction chemotherapy for acute leukaemia. More than 80% of those with haematological malignancies will develop fever during more than one chemotherapy cycle.1 Since neutropenic patients are unable to mount a strong inflammatory response to infections, fever may be the only sign. Infection in neutropenic patients can progress rapidly, leading to serious complications and even death, with a mortality rate ranging from 2% to 21%.2 3 It is critical to recognise neutropenic fever patients early and initiate empirical, broad-spectrum antibiotics. Major international guidelines advocate early administration of empirical antibiotics within 1 hour of emergency department (ED) presentation, sometimes even without cytological proof of neutropenia.4 5 6 7 However, management of febrile neutropenic patients varies across different EDs, and even among different physicians. A recent audit performed in EDs in the United Kingdom showed that only 26% of the audited patients received intravenous antibiotics within the target time of 1 hour.8 Another study in French EDs showed that management of febrile neutropenia was inadequate and that severity was under-evaluated in the critically ill.9 In order to improve and standardise the care of post-chemotherapy sepsis in haematological malignancy patients, the Accident and Emergency Department (A&E) and Department of Medicine of Queen Elizabeth Hospital (QEH) initiated a treatment protocol in 2011; QEH is the first hospital in Hong Kong to implement such a protocol. The protocol included febrile patients with haematological malignancy who had received chemotherapy within 1 month of the ED visit.
These patients were identified at triage station and provided with a fast-track consultation. The ED physician would verify the history and perform a thorough physical examination and targeted investigations. Empirical antibiotics were administered after taking appropriate culture samples aiming at a door-to-antibiotic (DTA) time of less than 1 hour (Fig).
 

Figure. Protocol for empirical antibiotic treatment in the emergency department for post-chemotherapy febrile haematological malignancy patients
 
Local publications on post-chemotherapy patients have mainly focused on solid tumour patients, in-patient management, and their outcomes.10 There is a paucity of literature concerning the initial ED management of haematological malignancy patients. The objective of this study was to examine the protocol compliance rate among ED physicians, the DTA time before and after implementation of the protocol, and the impact of the protocol on A&E and hospital services. It also serves to provide invaluable epidemiological data regarding haematological malignancy patients in Hong Kong.
 
Methods
This is a before-and-after study of the impact of a protocol on the management of post-chemotherapy sepsis in haematological malignancy patients. A 2-year retrospective chart review was conducted. The first chart review covered June 2010 to May 2011. These patients were admitted through the ED to the haematological ward prior to implementation of the protocol and served as historical referents. Data were retrieved from the admission book of the haematology ward; patients with a diagnosis of post-chemotherapy fever or neutropenic fever were shortlisted, and cases admitted through the ED were analysed. The second year started from June 2011. The intervention group included patients recruited in the protocol. Two patients who fulfilled the inclusion criteria were excluded from the study since they refused any investigation or treatment in the ED despite explanation. The charts were reviewed by two emergency physicians and two senior nurses. Any discrepancy was resolved by discussion among the investigators. The protocol was implemented on a 24-hour basis. According to the protocol, fever was defined by a single measurement of oral temperature of >38.3°C either at the triage station or self-reported at home. Neutropenia was defined as an absolute neutrophil count (ANC) of <1 × 10⁹ /L. Sepsis was defined by the Bone criteria11 (ie ≥2 out of 4 of the following: leukocyte count <4 or >12 × 10⁹ /L, respiratory rate >20/min, oral temperature >38°C or <35°C, pulse >90 beats/min). Door-to-antibiotic time was charted in the medical record. Lengths of stay in the A&E and hospital were retrieved from the Clinical Data Analysis and Reporting System. The primary outcome was mean DTA time. Secondary outcomes included compliance of the ED physician with the protocol, mean ED length of stay, mean hospital length of stay, and the adverse outcome rate.
Adverse outcomes included occurrence of a serious medical complication or death during index admission; these criteria are commonly cited in oncology literature.12 Adverse outcome was charted from patients’ medical record during the index admission.
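The sepsis screen described in the methods above is simple enough to express as a short checker. The helper below is hypothetical (not code used in the study); it mirrors the Bone/SIRS definition in the text, counting a patient as septic when at least two of the four criteria are met.

```python
def meets_sepsis_criteria(wbc, rr, temp_c, pulse):
    """Hypothetical checker for the Bone (SIRS) criteria as defined in
    the protocol: sepsis when at least 2 of the 4 criteria are met.
    wbc in x10^9/L, rr in breaths/min, temp_c in degrees Celsius,
    pulse in beats/min."""
    criteria = [
        wbc < 4 or wbc > 12,       # leukocyte count deranged
        rr > 20,                   # tachypnoea
        temp_c > 38 or temp_c < 35,  # fever or hypothermia
        pulse > 90,                # tachycardia
    ]
    return sum(criteria) >= 2

# A febrile, tachypnoeic, tachycardic patient meets three criteria:
print(meets_sepsis_criteria(wbc=2.1, rr=24, temp_c=38.6, pulse=110))
```

Note that the leukocyte-based criterion is the only one requiring a laboratory result; the other three can be assessed at triage, which is consistent with the protocol's aim of starting antibiotics before the ANC is known.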
 
Chi squared tests were performed when comparing categorical parameters between the protocol and referent groups. Student’s t tests were performed for parametric variables. All statistical analyses were performed using the Statistical Package for the Social Sciences (Windows version 17; SPSS Inc, Chicago [IL], US). A P value of less than 0.05 was regarded as statistically significant.
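As a sketch of the categorical comparisons described above, the Pearson chi-squared statistic for a 2 × 2 table has a closed form; values above 3.841 (the 0.95 quantile of the chi-squared distribution with 1 degree of freedom) correspond to P<0.05. The worked table below uses counts taken from the Results (43/50 protocol vs 0/19 referent patients meeting the 1-hour DTA target, the latter inferred from the 70-minute minimum in the referent group); the function itself is a generic illustration, not the study's SPSS output.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]],
    using the shortcut n*(ad - bc)^2 / (row and column sum products)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Met the <1-hour DTA target (yes/no) in protocol vs referent groups.
stat = chi2_2x2(43, 7, 0, 19)
print(round(stat, 2), stat > 3.841)  # True => significant at P<0.05
```

With expected cell counts this small in the referent row, a continuity correction or Fisher's exact test would ordinarily be preferred; the uncorrected statistic is shown only to make the mechanics of the comparison concrete.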
 
The study was conducted in the A&E of QEH, Hong Kong, a tertiary referral centre for haematological malignancy patients. The A&E of QEH is an urban ED with a daily attendance of 500 and is one of the busiest EDs in Hong Kong. This study was approved by the chief of service of the department.
 
Results
A total of 69 patients were recruited; 19 patients were referents while 50 belonged to the protocol group. Baseline demographic data are shown in Table 1. Overall, 49% of the patients were male. Their mean age was 56 years (range, 20-81 years). Leukaemia was the most commonly encountered haematological malignancy, accounting for 51% of cases (n=35/69). Among these, acute myeloid leukaemia was the most prevalent subtype. Lymphoma was the second most common haematological malignancy, making up 42% (n=29/69) of the cases. The mean duration from the last chemotherapy dose to the ED visit was 12 days in both groups of patients. At least one co-morbidity was present in 47% of patients in the referent group and in 52% of patients in the protocol group (P=0.18).
 

Table 1. Baseline characteristics of study patients
 
During the index ED visit, the mean door-to-consultation time was 15 and 12 minutes in the referent group and protocol group, respectively (P=0.40). Overall, 88% (n=16/19 in the referent and n=45/50 in the protocol group) of the patients fulfilled the sepsis criteria; 64% (n=44/69) had an ANC of <1 × 10⁹ /L, although the result was not known at the time of consultation. All protocol group patients received antibiotics after blood cultures were taken during their ED stay, compared with none in the referent group. Tazobactam-piperacillin (Tazocin; Pfizer, Taiwan) was the most commonly prescribed antibiotic in the ED. The mean DTA time in the protocol group was 47 minutes compared with 300 minutes in the referent group (P<0.05). Overall, 86% (n=43/50) of protocol group patients achieved the target DTA time of less than 1 hour (P<0.05). The shortest time required for antibiotic administration in the referent group was 70 minutes. The mean length of stay in the ED was 105 minutes in the referent group versus 76 minutes in the protocol group (P=0.46). The major outcomes are shown in Table 2.
 

Table 2. Comparison of outcomes of patients with post-chemotherapy fever between the protocol and referent groups
 
The duration of fever, which was defined as oral temperature of >38°C for 24 hours, was 4 days in the referent group and 3 days in the protocol group (P=0.09). One patient from the referent group suffered from septic shock and required intensive care unit (ICU) admission; no patient from this group died. Six patients in the protocol group had adverse outcomes; three had septic shock requiring inotropic support, one of them required ICU admission, while three patients died during index admission. Adverse event rate was 5% in the referent group versus 14% in the protocol group (P=0.45). Overall, 25% (n=17/69) of patients had bacteraemia. Escherichia coli was recovered in five samples of which two were extended-spectrum beta-lactamase (ESBL)–producing bacteria. Streptococcus mitis was the second most common pathogen and was found in four samples. Overall, 43% (n=30/69) of the patients had microbiologically documented infection. The mean length of hospital stay was 15 days in the referent group compared with 11 days in the protocol group (P=0.15).
 
Discussion
Chemotherapy-induced sepsis is a medical emergency that requires urgent assessment and treatment with antibiotics. Our study shows that 88% of post-chemotherapy febrile patients fulfilled the sepsis criteria. Overall, 25% of patients had bacteraemia, a rate similar to that reported in the literature.4 Hence, prompt identification and early administration of broad-spectrum empirical antibiotics is the cornerstone of management. In a retrospective study of 2731 patients with septic shock (only 7% of whom were neutropenic), each hour delay in initiating effective antimicrobials decreased survival by around 8%.13 Another cohort study showed that the in-hospital mortality among adult patients with severe sepsis or septic shock decreased from 33% to 20% when time from triage to appropriate antimicrobial therapy was ≤1 hour compared with >1 hour.14 Our protocol suggested Tazocin as the first-line antibiotic, in accordance with the 2010 Infectious Diseases Society of America guideline.4 However, the rising trend of ESBL E coli infection may raise concern of antibiotic resistance. A larger-scale cohort study should be carried out to update the local microbiology prevalence and amend the empirical antibiotic recommendations accordingly.
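To put the roughly 8%-per-hour figure cited above in perspective, the toy calculation below compounds the decline multiplicatively from an assumed baseline; both the baseline survival and the multiplicative reading are illustrative assumptions (the cited study reported hour-by-hour decreases, not this exact model).

```python
# Toy illustration only: assumed baseline survival with immediate
# antibiotics, and an assumed multiplicative reading of the ~8% per-hour
# decline in survival cited from the septic shock literature.
baseline = 0.80
decline_per_hour = 0.08

for delay_h in range(6):
    survival = baseline * (1 - decline_per_hour) ** delay_h
    print(f"{delay_h} h delay: estimated survival {survival:.2f}")
```

Even under these rough assumptions, a 5-hour delay erodes roughly a third of the baseline survival, which is why the protocol targets a DTA time of under 1 hour.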
 
Implementation of the protocol in our department significantly reduced the mean DTA time from 300 minutes to 47 minutes (P<0.05). Furthermore, 86% of patients achieved the target DTA time of <1 hour. The result was satisfactory when compared with similar studies conducted in Europe and North America, where the reported median DTA time ranged from 154 minutes to 3.9 hours.15 16 17 Audits from the UK report that only 18% to 26% of patients receive the initial antibiotic within the target DTA time of 1 hour.8 According to the authors, the most common reasons for failure to comply with this time frame included: failure to administer the initial dose of the empirical antibacterial regimen until the patient had been transferred from the ED to the inpatient ward; prolonged time between arrival and clinical assessment; lack of awareness of the natural history of neutropenic fever syndrome and its evolution to severe sepsis and shock; failure of the ED to stock appropriate antibacterial medications; and non-availability of neutropenic fever protocols in the ED for quick reference.8 The last two points are further supported by other studies. A chart review of 201 febrile neutropenic patients in Canada showed that an electronic clinical practice guideline could decrease the DTA time by 1 hour (3.9 hours vs 4.9 hours).16 Another retrospective observational study of the timeliness of antibiotic administration in severely septic patients presenting to a US community ED showed that storing key antibiotics could decrease the mean DTA time by 70 minutes (167 minutes vs 97 minutes).18 The percentage of severely septic patients receiving antibiotics within 3 hours of arrival at the ED increased from 65% pre-intervention to 93% post-intervention.18 Before the implementation of this protocol, multiple briefing sessions were held with nurses and physicians to increase awareness about prompt treatment of post-chemotherapy fever. Antibiotics were stocked in the ED and were readily available.
The protocol could be easily downloaded from the department website. Regular collaboration existed between the nursing manager and the pharmacist to replenish the antibiotic stock. Thus, successful implementation of the protocol involved a joint effort by different parties.
 
There was a trend towards reduced duration of fever and length of hospital stay in the intervention group. Although this does not imply causation, especially in view of the small sample size, the association raises the question of whether a delay in antibiotic delivery indeed increases the length of hospital stay. A similar correlation was demonstrated in a UK review.15 However, we could not demonstrate an impact on mortality or adverse outcomes. The reason may partly be related to the small sample size, the heterogeneous nature of haematological malignancies, and the overall low incidence of mortality (4%) in our study as compared with the 49.8% in-hospital mortality rate reported in an 11-year review.19 Our results show that the length of ED stay was similar between the control and intervention groups, thus demonstrating that this protocol did not add further burden to an already overcrowded ED.
 
This study has several limitations. First, it used a retrospective chart review design, with the inherent challenges of missing information and poor documentation. Second, prior to implementation of the protocol, febrile haematological malignancy patients were often instructed to attend either the day ward or the A&E; this partly explains the relatively small case number in the control group. In addition, we relied on diagnosis coding for case identification in the control group; some cases might have been missed as a result of coding errors. Even when such patients attended the ED, a lack of awareness and the reluctance of physicians to prescribe antibiotics led to a significant delay in administration of the first dose of antibiotic. Although the number of control cases was small, the mean DTA time of 300 minutes echoed that in a similar study performed overseas.16 Efforts were made to use accepted chart review methods to assess outcomes that were automatically recorded in the electronic ED information systems, and to examine the nurse records of antibiotic administration.
 
Conclusion
Implementation of a treatment protocol for post-chemotherapy febrile haematological malignancy patients can significantly shorten the mean DTA time to <1 hour, which is now the standard of care worldwide. The key to effective implementation lies in orchestration of efforts between administrators, physicians, nurses, and pharmacists. We have shown that the protocol is feasible even in a busy urban ED.
 
Acknowledgement
We would like to thank Ms Kelly Choy for statistical analysis.
 
Declaration
No conflicts of interests were declared by authors.
 
References
1. Klastersky J. Management of fever in neutropenic patients with different risks of complications. Clin Infect Dis 2004;39 Suppl 1:S32-7. CrossRef
2. Smith TJ, Khatcheressian J, Lyman GH, et al. 2006 Update of recommendations for the use of white cell growth factors: an evidence-based clinical practice guideline. J Clin Oncol 2006;24:3187-205. CrossRef
3. Herbst C, Naumann F, Kruse EB, et al. Prophylactic antibiotics or G-CSF for the prevention of infections and improvement of survival in cancer patients undergoing chemotherapy. Cochrane Database Syst Rev 2009;(1):CD007107.
4. Freifeld AG, Bow EJ, Sepkowitz KA, et al. Clinical practice guideline for the use of antimicrobial agents in neutropenic patients with cancer: 2010 update by the Infectious Diseases Society of America. Clin Infect Dis 2011;52:e56-93. CrossRef
5. Dellinger RP, Levy MM, Rhodes A, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med 2013;41:580-637. CrossRef
6. Bate J, Gibson F, Johnson E, et al. Neutropenic sepsis: prevention and management of neutropenic sepsis in cancer patients (NICE Clinical Guideline CG151). Arch Dis Child Educ Pract Ed 2013;98:73-5. CrossRef
7. Flowers CR, Seidenfeld J, Bow EJ, et al. Antimicrobial prophylaxis and outpatient management of fever and neutropenia in adults treated for malignancy: American Society of Clinical Oncology clinical practice guideline. J Clin Oncol 2013;31:794-810. CrossRef
8. Clarke RT, Warnick J, Stretton K, Littlewood TJ. Improving the immediate management of neutropenic sepsis in the UK: lessons from a national audit. Br J Haematol 2011;153:773-9. CrossRef
9. André S, Taboulet P, Elie C, et al. Febrile neutropenia in French emergency departments: results of a prospective multicentre survey. Crit Care 2010;14:R68. CrossRef
10. Hui EP, Leung LK, Poon TC, et al. Prediction of outcome in cancer patients with febrile neutropenia: a prospective validation of the Multinational Association for Supportive Care in Cancer risk index in a Chinese population and comparison with the Talcott model and artificial neural network. Support Care Cancer 2011;19:1625-35. CrossRef
11. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest 1992;101:1644-55. CrossRef
12. Klastersky J, Paesmans M, Rubenstein EB, et al. The Multinational Association for Supportive Care in Cancer Risk Index: a multinational scoring system for identifying low-risk febrile neutropenic cancer patients. J Clin Oncol 2000;18:3038-51.
13. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med 2006;34:1589-96. CrossRef
14. Gaieski DF, Mikkelsen ME, Band RA, et al. Impact of time to antibiotics on survival in patients with severe sepsis or septic shock in whom early goal-directed therapy was initiated in the emergency department. Crit Care Med 2010;38:1045-53. CrossRef
15. Sammut SJ, Mazhar D. Management of febrile neutropenia in an acute oncology service. QJM 2012;105:327-36. CrossRef
16. Lim C, Bawden J, Wing A, et al. Febrile neutropenia in EDs: the role of an electronic clinical practice guideline. Am J Emerg Med 2012;30:5-11, 11.e1-5.
17. Nirenberg A, Mulhearn L, Lin S, Larson E. Emergency department waiting times for patients with cancer with febrile neutropenia: a pilot study. Oncol Nurs Forum 2004;31:711-5. CrossRef
18. Hitti EA, Lewin JJ 3rd, Lopez J, et al. Improving door-to-antibiotic time in severely septic emergency department patients. J Emerg Med 2012;42:462-9. CrossRef
19. Legrand M, Max A, Peigne V, et al. Survival in neutropenic patients with severe sepsis or septic shock. Crit Care Med 2012;40:43-9. CrossRef
 