Syncope and Clinical Decision Rules

Clinical Scenario and PICO question

Fresh from med school, you nervously approach your 1st ED shift.  Your 1st patient is Mr. Jones, a very active 80 y/o.  He enjoys tennis, golf, biking, boating, cards & spending time with his buddies, wife, siblings, children & grandchildren.  His wife provides the following eyewitness account.  Mr. Jones was preparing to hang a picture.  His wife heard a metal clank & turned to see his tape measure falling from his open hand & bouncing on the floor.  Mr. Jones was simply collapsing/falling backwards & Mrs. Jones couldn’t catch him in time.  He fell & hit the back of his head on the wooden floor.  Mrs. Jones ran over & found him unresponsive (no Sz activity), with a little blood coming from the back of his head.  She went to the phone & called 911.  Less than 60 secs later, she was back @ his side.  He was sitting up holding his head.  He had no idea what had happened & said he’s been “fine” since.

Mr. Jones’ ED exam was normal, including VS’s, P.Ox, no cardiac murmurs, no signs of CHF & heme (-) stool on rectal exam.  PMHx is “healthy as a horse,” with only HTN.  Meds = Lopressor-HCT (25/50 HCTZ-metoprolol) + 81 mg ASA/day.

CXR, plain CT head/neck, CBC, chemistries, & cardiac markers are (-).  BNP is 250.  EKG is identical to the EKG from 2 years ago → NSR @ 72, with normal PR, QRS, & QTc intervals, & LVH.

Mr. Jones receives 1000 mg acetaminophen, Td, & 5 staples to his scalp.  4 hrs after his fall, he has had no dysrhythmias, & he wants to go home.  The ED faculty & his wife prefer admission.  When contacted, his PMD says, “I don’t like that story.  Get him admitted to a tele-bed.  My admits are covered by Dr. Hospitalist, so call him.”

Dr. Hospitalist declines the admit and launches into a prolonged explanation that includes, “My last 50 syncope admits got tele for 1-2 days, nothing ever happened & they all went home.”  “I find it amusing the ED doesn’t even know your own literature.”  “If you apply the SF Syncope Rules to Mr. Jones, he can be safely D/C from the ED.”  “If there are any additional concerns, his PMD can arrange an outpatient Holter monitor.”

When apprised of the conversation, your ED faculty parries, “Hey, I know the EM literature enough to know that the SF Syncope rules don’t work; they miss 10% of bad outcomes.  Plus isn’t there that new ROSE Syncope Rule that says to admit for elevated BNP?  Call the Hospitalist back.”  You find the ROSE (Risk Stratification of Syncope in the Emergency Department) study online & discover Mr. Jones would need a BNP of 300 for admission.  Your EM faculty solves the dilemma by suggesting, “Order a tilt test, a 2nd Troponin & a 2nd BNP.  We’ll sign Mr. Jones out to the next team.”

Unfortunately, Harwood is part of the oncoming team & declines your sign-out plan.  “This guy needed admission 8 hours ago.  We’re not going to wait for more bogus testing.”  He suggests the EM faculty simply call Dr. Hospitalist & get Mr. Jones admitted.  “Mr. Jones meets ACEP & ESC (European Society of Cardiology) syncope admit criteria.  If you need literature, give Dr. Hospitalist the STePS (Short-Term Prognosis of Syncope) study or OESIL (Osservatorio Epidemiologico della Sincope nel Lazio).”

Your 1st shift is almost over & you haven’t even gotten your 1st Pt admitted.  You realize this residency thing isn’t as easy as the EM-3’s make it look.  Before you switch to a career in Pathology, you decide to read the Journal Club articles.  Still, you wonder:

1.   Is there a difference between a “Decision Rule”, a “Prediction Instrument”, a “Clinical Prognostic Model” and a “Guideline”?

2.   Are any of these decision tools worth using?

3.   If so, how do you tell the good “Decision Tools” from the bad ones?

P:   80 y/o male with syncope.

I:    ED discharge (based on a decision tool).

C:   Hospital admission.

O:  “Bad” outcomes within 7 &/or 30 days.

 

Synopsis

Thanks to Harwood and Michelle for a bucolic mid-summer’s evening.  Also, excellent synopses and discussions by JoEllen, Gromis, Abbi, Nelson, Yenter and Ben.  The topic for Journal Club this month was decision rules, better described as decision instruments or tools, as they exist to augment, but never to replace, physician judgment.

The development of a decision tool should answer the following 6 questions:

1.   Is there a need for the tool?

2.   Was the tool derived according to sound methodologic standards?

3.   Has the tool been prospectively validated and refined?

4.   Has the tool been successfully implemented into clinical practice?

5.   Would use of the tool be cost effective?

6.   How will the tool be disseminated and implemented?

As an example of a prevalent presenting complaint that is often harmless but occasionally associated with significant morbidity or death (“low-risk, high-stakes”), we examined 2 decision instruments for syncope. 

The San Francisco Syncope Rule derivation set results were published in 2004 in the Annals of Emergency Medicine, and at the time the rule was considered a possible game-changer for syncope.   The goal of the SF Syncope Rule is to identify ED patients with syncope or near syncope who are at low risk for short-term (7 day) serious outcomes, allowing clinicians to potentially send home low-risk patients safely.  Sensitivity of the rule in the derivation cohort was 96% (95% CI 92%-100%) and specificity was 62% (95% CI 58%-66%). 

Our first article was:

 1. Quinn JV, McDermott DA, Stiell IG, et al.  Prospective validation of the San Francisco Syncope Rule to predict patients with serious outcomes.  Ann Emerg Med. 2006;47:448–53.

In this study, the authors prospectively validated the SF Syncope Rule, evaluating 791 consecutive visits in adults for syncope, excluding patients with clear drug/trauma/alcohol/seizure-associated syncope, or with altered mental status or new neurologic deficits.  Patients were predicted to be at high risk for an adverse short-term outcome if they met ANY of the following criteria: history of CHF, Hct <30%, abnormal ECG (non-sinus rhythm or any new changes), shortness of breath, or triage SBP <90.  The rule is often remembered by its mnemonic, CHESS.  In this validation cohort, sensitivity/specificity was based on serious outcomes that were undiagnosed during the ED visit.  Short-term serious outcomes included death, MI, arrhythmia, PE, CVA, SAH, significant hemorrhage/anemia, procedural interventions, and re-hospitalization.  Sensitivity was 98% (95% CI 89%-100%) and specificity was 56% (95% CI 52%-60%).
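To make the rule concrete, here is a minimal sketch (mine, not code from the study) of the CHESS criteria in Python; the field names and the example patient resembling Mr. Jones are assumptions for illustration only.

```python
# Minimal sketch of the San Francisco Syncope Rule (CHESS) as described above.
# Field names and the example patient are illustrative, not from the paper.

def sf_syncope_high_risk(pt: dict) -> bool:
    """Return True if the patient meets ANY CHESS criterion (predicted high risk)."""
    return any([
        pt["history_of_chf"],          # C - Congestive heart failure history
        pt["hematocrit_pct"] < 30,     # H - Hematocrit < 30%
        pt["abnormal_ecg"],            # E - ECG non-sinus rhythm or any new changes
        pt["shortness_of_breath"],     # S - Shortness of breath
        pt["triage_sbp_mmhg"] < 90,    # S - Triage systolic BP < 90 mm Hg
    ])

# Hypothetical values resembling Mr. Jones: normal labs and vitals, ECG unchanged
# from 2 years ago, no dyspnea, no CHF history.
mr_jones = {
    "history_of_chf": False,
    "hematocrit_pct": 42,
    "abnormal_ecg": False,
    "shortness_of_breath": False,
    "triage_sbp_mmhg": 128,
}

print(sf_syncope_high_risk(mr_jones))  # False -> "low risk" by the rule
```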

 

So what’s the problem?  Well, from the standpoint of decision rule development, one concern is that the same group both derived and validated the rule at the same (single) institution, raising concerns about external validity: how will it perform in another patient population?  From an internal validity standpoint, there is no “fishbone” diagram, i.e., no accounting of how many patients were eligible for the study, how many were actually enrolled, why some weren’t enrolled, etc.

Gromis made the interesting point that patients meeting one of these criteria for high risk are already a high-risk population; they could come in with an ankle sprain and still have a risk for a serious cardiac or neuro outcome in the following 30 days.  Does this really help differentiate low- from high-risk patients, or are the factors in the rule just obvious common sense?  A second point from Gromis: there is no sensitivity analysis in the article.  Sensitivity analyses are key methodologic features of a paper wherein the authors re-analyze their data under different assumptions.  What if there was one additional bad outcome that was missed?  What happens if all the patients lost to follow-up had bad outcomes…or good outcomes?  Small changes in missing or incomplete data can change the results dramatically, and this should always be discussed in the manuscript.  Misclassifying one bad outcome in this study easily drops the sensitivity of the rule to the low 90s, with a lower CI limit in the 80s.
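To make that fragility concrete, here is a small sketch (mine, not the authors’) of the kind of re-analysis a sensitivity analysis would report.  The counts are an assumption back-calculated from the reported 98% sensitivity and its confidence interval (roughly 52 of 53 serious outcomes detected), not the paper’s raw data, and the Wilson interval is used simply because it is easy to compute by hand.

```python
# Rough sketch (not from the paper): how sensitivity and its 95% CI shift when a
# few additional bad outcomes are assumed to have been missed.  Counts are
# back-calculated approximations, not the authors' raw data.
from math import sqrt

def sensitivity_with_wilson_ci(detected: int, missed: int, z: float = 1.96):
    """Sensitivity = detected / (detected + missed), with a Wilson 95% CI."""
    n = detected + missed
    p = detected / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return p, max(0.0, center - half), min(1.0, center + half)

for missed in (1, 2, 3):  # 1 missed ~ the published result; 2-3 = the "what if?"
    sens, lo, hi = sensitivity_with_wilson_ci(detected=53 - missed, missed=missed)
    print(f"missed={missed}: sensitivity {sens:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

Under these assumed counts, one or two additional missed outcomes pull the point estimate into the mid 90s and the lower confidence limit into the 80s, which is exactly the kind of instability the manuscript should have explored.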

 

Also, although ED physicians in this study made their own decisions about admission/discharge/management of the study patients, the physicians were filling out data forms and were very aware of the rule and the study.  Significant bias was likely introduced with regard to management decisions because of this foreknowledge.  There was also probably a Hawthorne effect, with the patients’ overall care and possibly outcomes improving simply from everyone knowing about the study and making small, even unrecognized changes in care.

 

There was also no “Table 1” in this paper, i.e., the initial table documenting the demographic information about the patients in the study, which is key to comparing patient populations within and between studies.  Harwood asked the provocative question of when audience members are comfortable (or at least don’t feel physically ill) when they hear that one of their ED patients has died.  Is it at 7 days, as in the derivation set?  Thirty days, as in this validation set?  Longer?  Changing the outcome follow-up time changes one’s perspective.  As Sean said, maybe it’s just most important that the patient gets the appropriate workup; if that is facilitated within a few days and the patient then has a bad outcome, you can at least feel that everything appropriate was done.  Ultimately, for syncope (and for other symptoms associated with possible badness), the importance of a rule is not in defining who needs to be admitted, but in defining who needs an appropriate and timely workup.

 

2. Birnbaum A, Esses D, Bijur P, et al.  Failure to validate the San Francisco Syncope Rule in an independent emergency department population.  Ann Emerg Med. 2008;52:151–9.

Several studies have since been published questioning the high sensitivity initially reported for the SF Syncope Rule.  Birnbaum’s 2008 study tested the SF Syncope Rule in 713 prospectively enrolled patients with syncope or near syncope.  They used the same inclusion, exclusion, and serious outcome definitions as the original derivation trial, as well as the original 7-day follow-up time.  It included only adults, whereas the original derivation/validation studies included children (changing the expected sensitivity).  This study did provide a (nearly complete) fishbone diagram, as well as a table with demographic specifics on the patients.  Physicians again were aware of the study and responsible for data collection, likely introducing bias.  A sensitivity analysis was performed, and making assumptions about missing data that would maximize the performance of the rule made no significant difference in the rule’s sensitivity.  In this study, the sensitivity of the SF Syncope Rule in predicting 7-day serious outcomes was 74% (95% CI 61%-84%) with a specificity of 57% (95% CI 53%-61%).  This analysis was for serious outcomes, whether recognized in the ED or in the following 7 days.

Harwood made the point that what we care about are decision instruments that identify bad outcomes that are not obvious in the ED.  If someone has a GI bleed and happens to pass out, the primary admission diagnosis is GI bleed, not syncope; we don’t need help identifying those patients.  The authors of this paper performed a post hoc analysis of serious outcomes not identified in the ED, and the SF Syncope Rule performed even more poorly, with a sensitivity of 68%.  Looking at the usefulness of the rule another way, as Dan Nelson pointed out, a negative likelihood ratio of roughly 0.5 moves you from your pretest to your posttest probability of a serious outcome by only a very limited amount.  Interestingly, the majority of serious outcomes missed by the rule were arrhythmias.
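Dan’s point is easy to verify with a quick back-of-the-envelope calculation (mine, not from the paper): convert the pretest probability to odds, multiply by the negative likelihood ratio, and convert back.  The 5% pretest probability below is illustrative, borrowed from the roughly 4-6% short-term serious outcome rate quoted in the wrap-up at the end.

```python
# Back-of-the-envelope sketch (not from the paper): effect of a negative
# likelihood ratio (LR-) of ~0.5 on the probability of a serious outcome.
# The 5% pretest probability is illustrative, not a study result.

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert pretest probability -> odds, apply the LR, convert back."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

sens, spec = 0.74, 0.57                  # reported in this validation study
lr_negative = (1 - sens) / spec          # ~0.46
print(f"LR- = {lr_negative:.2f}")
print(f"pretest 5% -> posttest {posttest_probability(0.05, lr_negative):.1%}")
# A "negative" rule only lowers the risk estimate from ~5% to roughly 2-2.5%.
```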

 

3.  Reed MJ, Newby DE, et al.  The ROSE (Risk Stratification of Syncope in the Emergency Department) Study.  J Am Coll Cardiol. 2010;55:713–21.

Finally, a brand-new syncope decision instrument, published in 2010.  What’s new and different about this tool?  First, it is a way excellent 9-page advertisement for BNP (my bias, although Biosite provided the test strips and the point-of-care machine, and paid for the author to travel to Spain to present the results).  The authors studied about 550 patients in a derivation cohort and about 550 patients in a validation cohort (results of both reported in the same study).  In each case, this was a little more than half of the potentially eligible patients; they missed a bunch of eligible patients, and the death rate was slightly higher in the non-enrolled patients.  Their tool, with the mnemonic BRACES, recommends admission if a patient has any of the following: BNP ≥300, Bradycardia with HR <50, Rectal exam heme (+), Anemia with Hgb ≤9 g/dL, Chest pain, ECG with Q waves, or Saturation ≤94% on room air.

The authors reported an “excellent” sensitivity of 87.2% in the validation cohort for predicting 30-day serious outcomes (specificity 65.5%).  No confidence intervals are reported anyplace, so who knows how much higher your risk is than simply missing 1 in 10 bad outcomes.  No demographic information on the patients, no sensitivity analysis.  As often happens, the sensitivity dropped from 92.5% in the derivation set to 87% in the validation set…what happens when it’s externally validated down the road?  Likely a further reduction in sensitivity.  The authors use a lot of ink to discuss how great BNP was at predicting badness, although used alone it picked up only 41% of serious outcomes (an “excellent” predictor per the authors).  BNP increases with age, so could it be that BNP is just a surrogate for increasing risk in the elderly?  (Order a BNP, or, as Erik does, just ask the patient how old they are.)  One small pro-BNP point: in this study they didn’t see the large number of missed arrhythmias (they missed other things instead).  Maybe there’s some utility in ordering the BNP in selected patients as an additional screen for higher risk, but this study doesn’t answer that question.
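For symmetry with the CHESS sketch above, here is a similarly hypothetical encoding of the BRACES criteria applied to a patient resembling Mr. Jones (BNP 250, heme-negative stool, unchanged ECG, normal vitals and saturation); the field names, the thresholds as coded, and the example values are my assumptions, not the authors’ implementation.

```python
# Minimal sketch of the ROSE (BRACES) admission criteria as described above.
# Field names and example values are illustrative, not from the paper.

def rose_recommends_admission(pt: dict) -> bool:
    """Return True if the patient meets ANY BRACES criterion."""
    return any([
        pt["bnp_pg_ml"] >= 300,            # B - BNP >= 300 pg/mL
        pt["heart_rate_bpm"] < 50,         # B - Bradycardia
        pt["rectal_heme_positive"],        # R - Rectal exam with occult blood
        pt["hemoglobin_g_dl"] <= 9,        # A - Anemia
        pt["chest_pain"],                  # C - Chest pain with the syncope
        pt["ecg_q_waves"],                 # E - ECG with Q waves
        pt["room_air_sat_pct"] <= 94,      # S - Saturation <= 94% on room air
    ])

# Hypothetical values resembling Mr. Jones.
mr_jones = {
    "bnp_pg_ml": 250,
    "heart_rate_bpm": 72,
    "rectal_heme_positive": False,
    "hemoglobin_g_dl": 14,
    "chest_pain": False,
    "ecg_q_waves": False,
    "room_air_sat_pct": 98,
}

print(rose_recommends_admission(mr_jones))  # False -> not flagged for admission
```

By this encoding, Mr. Jones falls just under the BNP cutoff and is not flagged for admission, the same dilemma raised in the opening scenario.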

As Vijay very reasonably asked at the end of the evening: so now what?  Neither of the reviewed syncope decision tools works well, and we still have 1% of our ED patients presenting with syncope, approximately 4-6% of whom will have serious short-term outcomes not identified in the ED.  For the residents, first, it’s a reminder that medicine’s not easy.  It’s not all algorithms and checklists, but that’s also some of the beauty and joy of clinical practice (JoEllen said it better).  Channeling Harwood: if you use one of the syncope tools, especially SF, and it’s positive, you have a slam-dunk admission.  If the tool is negative, talk it over with your attending.  EBM is ultimately a joining of the published evidence, clinical expertise, and patient values; having a few years of experience helps.  Andrea added an important point: remember that prior workup matters, and a prior history of benign syncope in an individual matters.

Also, the value of decision tools lies not only in their rote application, but in recognizing that the components of the tool are individually high-risk factors and can be used to help develop your own clinical judgment.  Being familiar with the Ottawa ankle rules reminds me which parts of the ankle exam to really focus on.  As several in the audience pointed out, syncope is a complex presenting complaint and therefore may not lend itself to the easy development of a decision instrument.  However, new rules for a variety of complaints are being rolled out every month, and understanding how to critically appraise articles describing new decision tools is crucial to helping you separate the Leatherman Wave from the Bassomatic.