Hospital Medicine Unplugged
Author: Roger Musa MD and Eric Bachrach MD
© Roger Musa MD and Eric Bachrach MD
Description
Hospital Medicine Unplugged delivers evidence-based updates for hospitalists—no fluff, just the facts. Each 30-minute episode breaks down the latest guidelines, clinical pearls, and practical strategies for inpatient care. From antibiotics to risk stratification, radiology to discharge planning, you’ll get streamlined insights you can apply on the wards today. Perfect for busy physicians who want clarity, accuracy, and relevance in hospital medicine.
143 Episodes
In this episode of Hospital Medicine Unplugged, we sprint through prosthetic heart valves—how to choose between mechanical and bioprosthetic valves, manage anticoagulation safely, recognize complications, and navigate the expanding role of transcatheter valve replacement.
We begin with the two major categories of prosthetic valves: mechanical valves and bioprosthetic (tissue) valves. Mechanical valves are constructed from durable materials such as pyrolytic carbon and are designed to last decades, but their thrombogenic surface requires lifelong anticoagulation with a vitamin K antagonist.
Anticoagulation targets depend on valve position and risk factors.
• Mechanical aortic valve: target INR 2.5
• Mechanical mitral valve or high-risk aortic valve: target INR 3.0
In most patients, low-dose aspirin (75–100 mg daily) is added to vitamin K antagonist therapy to further reduce thromboembolic risk.
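Strictly as an illustration of the position-based targets above (the function name, parameters, and structure are illustrative assumptions, not from the episode), here is a minimal lookup sketch:

```python
def mechanical_valve_inr_target(position: str, high_risk: bool = False) -> float:
    """Return the INR target quoted above; illustrative sketch only, not a clinical tool."""
    if position == "mitral" or (position == "aortic" and high_risk):
        return 3.0  # mechanical mitral valve, or aortic valve with added risk factors
    if position == "aortic":
        return 2.5  # standard mechanical aortic valve
    raise ValueError("unknown valve position")

# Low-dose aspirin (75-100 mg daily) is usually layered on top of the VKA in most patients.
print(mechanical_valve_inr_target("aortic"))   # 2.5
print(mechanical_valve_inr_target("mitral"))   # 3.0
```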
Bioprosthetic valves, in contrast, are made from porcine valves or bovine pericardium. These valves are less thrombogenic, which allows for short-term anticoagulation (typically 3–6 months) after implantation followed by lifelong antiplatelet therapy with aspirin. The trade-off is durability—structural valve degeneration (SVD) eventually occurs due to calcification, fibrosis, or leaflet tearing.
Choosing between valve types requires balancing durability versus anticoagulation risk. Mechanical valves generally offer better long-term durability, while bioprosthetic valves avoid lifelong anticoagulation but may require future reoperation.
Age is one of the most important factors in valve selection. Evidence from large observational studies demonstrates that mechanical valves provide survival advantages in younger patients, particularly:
• Aortic valve replacement: survival benefit up to about age 55
• Mitral valve replacement: survival benefit up to about age 70
Current ACC/AHA guidelines generally recommend:
• Mechanical valves: younger patients (<50 years for aortic position, <65 years for mitral)
• Bioprosthetic valves: older patients or those with contraindications to long-term anticoagulation
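To make the age and position cutoffs above concrete, here is a minimal decision sketch (names and return strings are illustrative assumptions, not guideline language):

```python
def suggested_valve_type(age: int, position: str, anticoagulation_contraindicated: bool = False) -> str:
    """Rough sketch of the general preference described above; real selection is individualized."""
    if anticoagulation_contraindicated:
        return "bioprosthetic"
    if (position == "aortic" and age < 50) or (position == "mitral" and age < 65):
        return "mechanical"
    return "bioprosthetic (or shared decision)"

print(suggested_valve_type(45, "aortic"))   # mechanical
print(suggested_valve_type(70, "mitral"))   # bioprosthetic (or shared decision)
```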
The treatment landscape has changed dramatically with the development of transcatheter aortic valve replacement (TAVR). Initially reserved for patients with prohibitive surgical risk, TAVR is now widely used across risk groups.
Landmark trials such as PARTNER 3 demonstrated that in low-risk patients with severe aortic stenosis, TAVR produced outcomes comparable to surgical valve replacement at five years. TAVR offers advantages including lower rates of atrial fibrillation and bleeding, though it carries higher risks of paravalvular regurgitation and pacemaker implantation.
Guidelines now recommend:
• TAVR as a Class I option for patients who are inoperable or high surgical risk
• Either TAVR or surgical replacement for patients aged 65–80 years, depending on anatomy and patient factors
Anticoagulation management remains one of the most critical aspects of prosthetic valve care. Direct oral anticoagulants (DOACs) are contraindicated in mechanical valves. The RE-ALIGN trial showed increased thromboembolic and bleeding complications with dabigatran compared with warfarin, leading to early termination of the study. More recently, the PROACT Xa trial evaluating apixaban in patients with On-X mechanical valves also demonstrated excess thromboembolic events.
For bioprosthetic valves, however, DOACs may be used in patients who develop atrial fibrillation, although long-term data remain limited.
Despite technological advances, prosthetic valves carry important complications. One of the most serious is prosthetic valve endocarditis (PVE), which is associated with high mortality. Management requires prolonged intravenous antibiotics, typically for at least six weeks, and surgery may be required for heart failure, uncontrolled infection, or large vegetations.
Another major complication is prosthetic valve thrombosis, particularly with mechanical valves. Management depends on clinical severity and thrombus size. Options include urgent surgery or low-dose, slow-infusion fibrinolysis, with modern thrombolytic protocols achieving over 90% success rates and low complication rates.
A subtler but clinically important issue is prosthesis–patient mismatch (PPM). This occurs when the effective orifice area of the prosthetic valve is too small relative to the patient’s body surface area, creating persistent obstruction despite valve replacement. Severe PPM is defined as indexed effective orifice area <0.65 cm²/m² and is associated with higher mortality, reduced left ventricular mass regression, and worse heart failure outcomes.
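Since severe PPM is defined by a simple ratio, a worked example may help (all numbers below are hypothetical):

```python
def indexed_eoa(effective_orifice_area_cm2: float, body_surface_area_m2: float) -> float:
    """Indexed effective orifice area (cm^2/m^2) = EOA divided by body surface area."""
    return effective_orifice_area_cm2 / body_surface_area_m2

# Hypothetical example: EOA 1.2 cm^2 in a patient with BSA 2.0 m^2 -> 0.6 cm^2/m^2,
# which falls below the 0.65 cm^2/m^2 cutoff for severe PPM.
value = indexed_eoa(1.2, 2.0)
print(value, "severe PPM" if value < 0.65 else "not severe PPM")
```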
Certain populations require special consideration. Pregnancy with mechanical valves carries substantial maternal and fetal risk due to the competing challenges of anticoagulation and thrombosis prevention. Warfarin provides the most reliable valve protection but carries risk of embryopathy at higher doses. For women planning pregnancy, bioprosthetic valves are often preferred, even though they may require earlier replacement.
Finally, structural valve degeneration is the major limitation of bioprosthetic valves. The process occurs gradually and progresses through stages—from early structural changes without hemodynamic impact to severe valve dysfunction requiring reintervention. Risk factors for accelerated degeneration include younger age, renal disease, hypertension, hyperparathyroidism, and prosthesis–patient mismatch.
Because degeneration can occur silently, lifelong surveillance with periodic echocardiography is essential for all patients with prosthetic valves.
We close with the key clinical takeaways:
• Mechanical valves offer durability but require lifelong warfarin therapy
• Bioprosthetic valves avoid chronic anticoagulation but eventually degenerate
• DOACs are contraindicated in mechanical valves
• TAVR has expanded treatment options across all surgical risk groups
• Complications include thrombosis, prosthetic valve endocarditis, structural degeneration, and prosthesis–patient mismatch
Prosthetic valve management is a lifelong process—successful outcomes depend not only on the procedure itself but on careful anticoagulation, surveillance, and early recognition of complications.
In this episode of Hospital Medicine Unplugged, we sprint through asbestosis—understand how inhaled fibers trigger progressive pulmonary fibrosis, recognize key radiographic features, and manage patients with attention to malignancy risk and progressive fibrotic disease.
We start with pathophysiology, where the story begins decades before symptoms appear. After inhalation, asbestos fibers deposit in the distal airways and alveoli. Alveolar macrophages attempt to engulf these fibers, but many fibers are too long to be fully internalized—triggering “frustrated phagocytosis.” This leads to persistent macrophage activation and release of inflammatory mediators including TNF-α, IL-1, and TGF-β. At the same time, reactive oxygen species form both from macrophage activation and from iron on the fiber surface, amplifying oxidative injury.
A key early event is alveolar epithelial cell apoptosis, driven by mitochondrial injury, p53-mediated pathways, and endoplasmic reticulum stress. Loss of epithelial integrity and chronic inflammation stimulate fibroblast activation and collagen deposition, ultimately producing the progressive interstitial fibrosis that defines asbestosis.
Not all asbestos fibers carry the same risk. Amphibole fibers—particularly crocidolite and amosite—are far more fibrogenic and carcinogenic than chrysotile fibers. Their needle-like shape, durability, and resistance to biological clearance allow them to persist in lung tissue for decades. Fiber dimensions matter: long fibers (>10–20 μm) and extremely thin fibers (<0.25 μm) pose the highest disease risk because they reach distal lung regions and resist macrophage clearance.
One of the defining features of asbestos disease is extraordinary latency. Clinical asbestosis usually develops 20–40 years after the first exposure, with peak disease occurrence around 40–45 years after exposure begins. Lung cancer tends to occur earlier, typically 30–35 years after exposure. Disease progression varies—some patients remain stable while others develop progressive fibrotic lung disease with significant annual declines in FVC, particularly those with fibrotic patterns on HRCT.
Diagnosis relies on a combination of exposure history, latency, imaging, and pulmonary function testing. According to consensus guidelines, the diagnosis requires:
• Documented asbestos exposure
• Appropriate latency interval
• Radiographic evidence of interstitial fibrosis
• Restrictive lung disease with reduced DLCO
While chest X-ray can detect classic small irregular opacities, high-resolution CT is far more sensitive. Key HRCT findings include:
• Subpleural curvilinear lines (one of the most specific findings)
• Intralobular and interlobular septal thickening
• Parenchymal bands
• Honeycombing in advanced disease
Importantly, most patients with asbestosis also show benign pleural abnormalities, such as pleural plaques or diaphragmatic pleural thickening, which strongly support asbestos exposure.
Unfortunately, no disease-modifying therapies are currently approved specifically for asbestosis. Management traditionally focuses on supportive care, including:
• Smoking cessation
• Vaccination against influenza and pneumococcus
• Pulmonary rehabilitation
• Oxygen therapy for hypoxemia
However, the treatment landscape is evolving. Because asbestosis can behave like other progressive fibrosing interstitial lung diseases, antifibrotic therapies are increasingly considered for patients with progressive disease. Nintedanib, approved for progressive fibrosing ILD, may slow lung function decline in patients with progressive asbestosis. Early studies of pirfenidone suggest acceptable safety and potential benefit, though definitive evidence remains limited.
Another critical dimension of asbestos exposure is malignancy risk. Asbestos causes two to six times more lung cancers than mesotheliomas, making asbestos-related lung cancer a major public health burden. The interaction with smoking is particularly dangerous: asbestos and smoking have a synergistic effect on lung cancer risk. In exposed workers, the combined effect can increase lung cancer mortality more than 30-fold.
Importantly, asbestos exposure increases lung cancer risk even in nonsmokers, but smoking cessation dramatically reduces risk over time. Within 10 years of quitting, lung cancer mortality drops significantly, and after 30 years, risk approaches that of never-smokers.
For malignant mesothelioma, amphibole fibers again carry the greatest risk. Crocidolite exposure has the highest potency, and mesothelioma risk continues to rise 40–50 years after initial exposure.
Because treatment options for mesothelioma remain limited, prevention and early detection are essential. The most effective intervention is elimination of exposure, enforced through occupational safety regulations, air monitoring, and protective equipment.
For individuals with significant asbestos exposure, low-dose CT screening can help detect early lung cancers, improving survival outcomes. Unfortunately, routine screening for mesothelioma is not currently recommended, as effective early detection strategies remain lacking.
We close with the key clinical takeaways:
• Asbestosis results from chronic macrophage activation and epithelial injury triggered by persistent asbestos fibers
• Amphibole fibers pose the highest risk due to their durability and biopersistence
• Disease latency often exceeds 20–40 years
• HRCT is the most sensitive imaging modality for diagnosis
• Antifibrotic therapies may benefit patients with progressive fibrotic phenotypes
• Smoking cessation dramatically reduces asbestos-related lung cancer risk
Asbestosis remains a preventable occupational disease, but its long latency means clinicians continue to encounter its consequences decades after exposure—making exposure history, imaging recognition, and malignancy surveillance critical in modern clinical practice.
In this episode of Hospital Medicine Unplugged, we sprint through glomerulonephritis—recognize the nephritic syndrome, decode complement patterns and immunofluorescence clues, and manage diseases ranging from self-limited post-infectious GN to rapidly progressive crescentic disease.
We start with the clinical syndrome of glomerulonephritis, defined by glomerular inflammation producing hematuria, hypertension, edema, and reduced kidney function. The classic picture is nephritic syndrome—tea- or cola-colored urine, oliguria, periorbital edema, and elevated blood pressure. At the microscopic level, RBC casts are the pathognomonic finding, proving that bleeding originates from the glomerulus rather than the urinary tract.
Understanding disease requires revisiting the glomerular filtration barrier, composed of three layers: fenestrated endothelium, the glomerular basement membrane (GBM), and podocytes connected by slit diaphragms. This barrier normally filters plasma while retaining proteins. Podocytes are terminally differentiated and poorly regenerative, making them particularly vulnerable to immune-mediated injury.
The core pathophysiology of GN is immune-mediated inflammation. Antibodies, immune complexes, and complement activation trigger inflammatory cascades within the glomerulus. This leads to endocapillary proliferation, mesangial expansion, and leukocyte infiltration, narrowing capillary lumens and lowering GFR. Capillary wall damage allows red blood cells to leak into urine, while the sudden decline in filtration drives sodium and water retention, producing hypertension and edema.
Modern classification emphasizes pathogenesis rather than morphology, and most GN falls into five categories:
• Immune-complex GN – granular immunoglobulin deposition (post-infectious GN, IgA nephropathy, lupus nephritis, MPGN)
• Pauci-immune GN – minimal immune deposition, typically ANCA-associated vasculitis
• Anti-GBM disease – linear IgG staining along the basement membrane
• Monoclonal immunoglobulin GN – related to plasma cell disorders
• C3 glomerulopathy – dominant complement deposition from alternative pathway dysregulation
Epidemiology varies by disease. Post-streptococcal GN (PSGN) primarily affects children aged 2–10 years, particularly in developing regions. In contrast, IgA nephropathy is the most common primary glomerular disease worldwide and typically presents in young adults. Interestingly, epidemiology has shifted: childhood PSGN is declining, while adult infection-related GN—often associated with staphylococcal infections—is increasing.
Clinical presentation depends on the underlying disease. Post-streptococcal GN typically occurs 1–12 weeks after a streptococcal infection, producing abrupt edema, hypertension, and hematuria. IgA nephropathy, in contrast, often presents with synpharyngitic hematuria—visible hematuria occurring simultaneously with an upper respiratory infection.
The urinalysis is the diagnostic cornerstone. Key findings include dysmorphic red blood cells, RBC casts, and mild-to-moderate proteinuria. Complement levels help narrow the differential:
• Low C3 and low C4: lupus nephritis, cryoglobulinemia, immune-complex MPGN
• Low C3 with normal C4: post-infectious GN or C3 glomerulopathy
• Normal complement: IgA nephropathy, ANCA-associated GN, anti-GBM disease
A crucial teaching point: C3 should normalize within 6–8 weeks in post-streptococcal GN. Persistent hypocomplementemia suggests another diagnosis, such as lupus nephritis or MPGN.
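A compact way to see the complement logic above (the differentials are the episode's lists; the function itself is an illustrative assumption):

```python
def complement_pattern_differential(c3_low: bool, c4_low: bool) -> list[str]:
    """Map a complement pattern to the differential diagnoses listed above."""
    if c3_low and c4_low:
        return ["lupus nephritis", "cryoglobulinemia", "immune-complex MPGN"]
    if c3_low:
        return ["post-infectious GN", "C3 glomerulopathy"]
    return ["IgA nephropathy", "ANCA-associated GN", "anti-GBM disease"]

# Example: low C3 with normal C4 suggests post-infectious GN or C3 glomerulopathy;
# if C3 stays low beyond 6-8 weeks, the post-streptococcal diagnosis should be questioned.
print(complement_pattern_differential(c3_low=True, c4_low=False))
```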
Additional testing includes ASO titers and anti-DNase B antibodies for streptococcal infection, autoimmune markers such as ANA and ANCA, and viral testing for hepatitis B, hepatitis C, and HIV.
Imaging plays a limited role. Renal ultrasound typically shows normal or enlarged kidneys in acute GN, helping distinguish acute inflammatory disease from chronic kidney disease.
When the diagnosis remains unclear—or when disease is severe—a kidney biopsy is essential. Histology reveals characteristic patterns:
• Post-infectious GN: diffuse endocapillary proliferation with neutrophils
• Crescents: severe injury indicating rapidly progressive GN
• Electron microscopy: classic subepithelial “hump-shaped” deposits in post-infectious disease
Immunofluorescence patterns guide classification:
• Granular deposition: immune-complex GN
• Linear IgG: anti-GBM disease
• Pauci-immune: ANCA vasculitis
• C3-dominant: C3 glomerulopathy
Management begins with supportive care for all patients. Key interventions include:
• Blood pressure control, ideally ≤120/70 mmHg with ACE inhibitors or ARBs
• Sodium restriction (<2 g/day)
• Diuretics for volume overload
• Fluid restriction when necessary
Treatment then becomes etiology-specific.
For post-infectious GN, therapy is largely supportive because the disease usually resolves spontaneously. Edema typically improves within 1–2 weeks, and renal function normalizes within about 4 weeks.
IgA nephropathy, however, may progress. Treatment strategies now include:
• Targeted-release budesonide to reduce intestinal IgA production
• SGLT2 inhibitors for kidney protection
• Sparsentan, a dual endothelin–angiotensin receptor antagonist
• Iptacopan, a complement pathway inhibitor
In rapidly progressive (crescentic) GN, immediate therapy is essential. Treatment typically includes:
• High-dose glucocorticoids (IV methylprednisolone followed by oral prednisone)
• Cyclophosphamide or rituximab
• Plasma exchange for anti-GBM disease
For ANCA-associated vasculitis, rituximab is now equivalent to cyclophosphamide for induction therapy and may be superior in relapsing disease.
Prognosis varies widely. Post-streptococcal GN in children generally resolves completely, though subtle long-term kidney risk may remain. IgA nephropathy has a much more variable course, with up to half of patients progressing to kidney failure within 10 years.
Predictors of poor outcome include:
• Persistent hypertension
• Proteinuria >1 g/day
• Reduced eGFR at diagnosis
• Extensive fibrosis or crescents on biopsy
We close with the high-yield clinical pearls:
• RBC casts confirm glomerular bleeding
• Complement patterns dramatically narrow the differential diagnosis
• Synpharyngitic hematuria points to IgA nephropathy
• Crescents signal severe rapidly progressive disease
• Persistently low C3 beyond 8 weeks argues against post-streptococcal GN
Glomerulonephritis is not a single disease but a spectrum of immune-mediated kidney disorders—and successful management depends on integrating clinical presentation, serology, complement levels, and biopsy findings to identify the exact mechanism driving glomerular injury.
In this episode of Hospital Medicine Unplugged, we sprint through thyroid cancer—understand the epidemiologic paradox of rising incidence but stable mortality, stage disease using modern AJCC criteria, apply ATA recurrence risk stratification, and tailor therapy from surgery and radioiodine to targeted molecular treatments.
We start with the epidemiology of thyroid carcinoma, the most common endocrine malignancy and the ninth most common cancer worldwide. In 2022 alone, there were roughly 821,000 new cases and 47,500 deaths globally. The disease shows a strong female predominance—about three quarters of cases occur in women, and the median age at diagnosis is in the early 50s. Notably, thyroid cancer is also the most common malignancy among adolescents and young adults aged 16–33 years.
One of the most striking trends is the dramatic rise in incidence over the past four decades. Global age-standardized incidence increased substantially from about 2.1 per 100,000 in 1990 to over 3.1 per 100,000 in 2017, with extremely high rates reported in countries such as South Korea, Cyprus, Ecuador, China, and Turkey. Yet mortality has remained remarkably stable at roughly 0.5 per 100,000, suggesting that much of the increase reflects overdiagnosis rather than a true surge in aggressive disease.
The driver behind this phenomenon is increased detection of small papillary thyroid cancers, often discovered incidentally during thyroid ultrasonography or cross-sectional imaging. Some studies estimate that more than 75% of thyroid cancers globally may represent overdiagnosis, particularly in high-income countries where imaging is widespread. Encouragingly, incidence rates have begun to plateau or decline in some regions following guideline changes discouraging unnecessary biopsy and treatment of very small nodules.
Next, we turn to staging, which guides prognosis and management. The AJCC 8th edition TNM staging system introduced an important shift by raising the prognostic age cutoff from 45 to 55 years. This reflects the excellent survival outcomes seen in younger patients.
For patients younger than 55 years, staging is remarkably simple:
• Stage I: any tumor size, any lymph node status, no distant metastasis
• Stage II: distant metastasis present
This simplified system reflects the outstanding prognosis in younger individuals, with more than 98% survival regardless of tumor characteristics.
For patients 55 years and older, staging becomes more detailed and incorporates tumor size, lymph node involvement, and extrathyroidal extension. Importantly, the 8th edition refined the definition of extrathyroidal extension so that only gross invasion of strap muscles qualifies for T3b staging, which has downstaged many patients and improved prognostic accuracy.
However, staging alone does not fully predict recurrence. That role belongs to the American Thyroid Association (ATA) risk stratification system, which categorizes patients as low, intermediate, or high risk of recurrence.
Approximate recurrence rates are:
• Low risk: ~1.5%
• Intermediate risk: ~5% overall
• High risk: ~25%
A key innovation in ATA management is dynamic risk stratification, where risk is continuously updated based on response to therapy.
Response categories include:
• Excellent response: ~4.7% recurrence risk
• Indeterminate response: ~17% recurrence
• Biochemically incomplete: ~58% recurrence
• Structurally incomplete: ~84% recurrence
This dynamic approach allows clinicians to de-escalate surveillance and treatment for patients who demonstrate excellent responses over time.
At the molecular level, thyroid cancer has a remarkably simple genomic landscape, dominated by mutations activating the MAPK signaling pathway.
The most common driver mutation is BRAF V600E, found in about 60% of papillary thyroid cancers. This mutation is associated with classic and tall-cell variants, increased lymph node metastases, and reduced responsiveness to radioactive iodine due to suppression of the sodium-iodide symporter.
Another important group includes RAS mutations, seen in follicular thyroid cancers and follicular-variant papillary carcinomas. These tumors often demonstrate vascular invasion but retain better responsiveness to radioactive iodine therapy.
Chromosomal rearrangements also play a role. RET/PTC fusions are common in radiation-induced thyroid cancers and pediatric cases, while TERT promoter mutations—particularly when combined with BRAF mutations—are associated with aggressive disease and poor prognosis.
Management of differentiated thyroid cancer increasingly emphasizes risk-adapted therapy, particularly regarding radioactive iodine (RAI).
RAI can serve three purposes:
• Remnant ablation after surgery
• Adjuvant therapy to reduce recurrence risk
• Treatment of known metastatic disease
Modern guidelines recommend avoiding routine RAI in low-risk patients, particularly those with small intrathyroidal papillary cancers ≤2 cm, negative lymph nodes, low postoperative thyroglobulin, and normal ultrasound.
RAI becomes selectively recommended when additional risk factors are present, such as tumors >4 cm, nodal metastases, lymphatic invasion, aggressive histologic variants, or elevated postoperative thyroglobulin levels.
For high-risk disease, including gross extrathyroidal extension, extensive nodal disease, or distant metastases, RAI therapy remains standard because it improves both survival and recurrence outcomes.
When thyroid cancer becomes radioiodine-refractory, systemic therapy may be required. Several targeted agents are now available.
First-line multikinase inhibitors include:
• Lenvatinib, which significantly improves progression-free survival and has response rates approaching 65%
• Sorafenib, with more modest response rates but proven efficacy
Second-line therapy includes cabozantinib for patients progressing after first-line agents.
Precision medicine is increasingly important. Tumors with specific driver mutations may respond dramatically to targeted therapies:
• RET fusions: treated with selpercatinib or pralsetinib, with response rates approaching 80–90%
• BRAF V600E: targeted with dabrafenib plus trametinib, particularly in anaplastic thyroid cancer
• NTRK fusions: treated with larotrectinib or entrectinib, with high response rates
Despite these advances, systemic therapy is not curative and often causes substantial side effects. Therefore, treatment should generally be reserved for progressive or symptomatic disease, while patients with slow-growing, asymptomatic metastases may be safely observed.
We close with the key clinical takeaways:
• Rising thyroid cancer incidence largely reflects overdiagnosis of small papillary tumors
• AJCC staging predicts survival, while ATA risk stratification predicts recurrence
• Dynamic response-to-therapy assessment allows individualized long-term management
• Molecular drivers like BRAF, RAS, and RET shape prognosis and treatment options
• Radioactive iodine is increasingly used selectively rather than routinely
Thyroid cancer illustrates a modern shift in oncology—from one-size-fits-all treatment toward precision risk-adapted care, balancing effective therapy with avoidance of unnecessary intervention.
In this episode of Hospital Medicine Unplugged, we sprint through anaphylaxis—recognize the rapid systemic reaction, understand the mast-cell storm driving shock, and deliver epinephrine immediately to prevent cardiovascular collapse.
We begin with the definition and diagnostic framework. Anaphylaxis is a severe, rapid-onset, life-threatening systemic hypersensitivity reaction. When it progresses to circulatory collapse with profound vasodilation and vascular leak, it becomes anaphylactic shock, a form of distributive shock with relative hypovolemia. Modern guidelines define anaphylaxis clinically: acute onset of illness with skin or mucosal symptoms plus respiratory compromise, hypotension, or severe gastrointestinal symptoms, or hypotension/bronchospasm after known allergen exposure—even without skin findings.
Next comes epidemiology. Anaphylaxis occurs in roughly 50–112 episodes per 100,000 person-years, and 1.6–5.1% of adults in the United States experience an episode during their lifetime. Fortunately, modern treatment has kept mortality low. Case fatality rates among emergency presentations are about 0.25–0.33%, translating to roughly 186–225 deaths per year in the United States.
Triggers vary by age. Food-induced anaphylaxis predominates in young children, while medication-induced reactions are more common in adults, especially those over age 50.
At the core of anaphylaxis lies mast-cell and basophil activation. In classic IgE-mediated type I hypersensitivity, an allergen first sensitizes the immune system, leading B cells to produce IgE antibodies that bind to high-affinity FcεRI receptors on mast cells and basophils. On re-exposure, allergen cross-linking of IgE triggers rapid cellular degranulation.
This releases a cascade of mediators including:
• Histamine and tryptase
• Leukotrienes and prostaglandins
• Platelet-activating factor (PAF)
• Cytokines such as IL-4 and IL-6
These mediators cause vasodilation, endothelial barrier disruption, bronchoconstriction, and massive capillary leak, shifting fluid from the intravascular space to tissues. The result is hypotension, airway compromise, and multisystem dysfunction.
Not all anaphylaxis is IgE mediated. Alternative mechanisms include IgG-mediated reactions, complement activation, contact system activation, and direct mast-cell activation via MRGPRX2 receptors. Clinically, these non-IgE pathways can produce identical presentations, which is why anaphylaxis remains a clinical diagnosis rather than a laboratory one.
The most common triggers fall into three major groups:
• Foods (≈32–37%)
• Medications (≈21–58%)
• Insect venom (≈15–25%)
In the United States, nine foods account for over 90% of IgE-mediated food allergies: milk, egg, wheat, soy, peanuts, tree nuts, fish, shellfish, and sesame. Peanuts remain the leading cause of fatal food-related anaphylaxis.
An emerging cause is alpha-gal syndrome, a delayed meat allergy triggered by tick bites, affecting tens to hundreds of thousands of individuals in the United States.
Medications are the most common trigger in adults, particularly beta-lactam antibiotics, followed by NSAIDs, biologic agents, chemotherapy drugs, and ACE inhibitors.
Another important concept is cofactors—conditions that lower the threshold for anaphylaxis. These include exercise, alcohol, infection, menstruation, and NSAID use. A classic example is food-dependent exercise-induced anaphylaxis, where patients tolerate a food normally but develop anaphylaxis if they exercise soon after ingestion.
Diagnosis relies on clinical criteria, most commonly the NIAID/FAAN criteria. Anaphylaxis is highly likely when there is acute involvement of skin or mucosa plus respiratory compromise or hypotension, multisystem involvement after allergen exposure, or isolated hypotension after exposure to a known trigger.
Laboratory confirmation is not required in the acute setting, but serum tryptase measured 90 minutes to 4 hours after symptom onset can support the diagnosis. An acute level greater than 1.2 × the baseline tryptase plus 2 ng/mL strongly suggests mast-cell activation.
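The tryptase rule is simple arithmetic, so here is a worked sketch (values hypothetical):

```python
def tryptase_supports_anaphylaxis(acute_ng_ml: float, baseline_ng_ml: float) -> bool:
    """Commonly cited rule quoted above: acute tryptase > 1.2 x baseline + 2 ng/mL."""
    return acute_ng_ml > 1.2 * baseline_ng_ml + 2.0

# Hypothetical example: baseline 5 ng/mL gives a threshold of 1.2*5 + 2 = 8 ng/mL,
# so an acute level of 11 ng/mL would support mast-cell activation.
print(tryptase_supports_anaphylaxis(11.0, 5.0))  # True
```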
Now to the most critical step: treatment.
Epinephrine is the first-line and life-saving therapy.
It should be administered immediately intramuscularly into the mid-anterolateral thigh at a dose of:
• 0.01 mg/kg of 1 mg/mL solution
• Maximum 0.3 mg in children
• Maximum 0.5 mg in adults
Doses can be repeated every 5–15 minutes if symptoms persist. There are no contraindications to epinephrine in anaphylaxis—even in patients with cardiovascular disease.
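As a worked illustration of the weight-based dosing above (a sketch with hypothetical weights, not a dosing tool):

```python
def im_epinephrine_dose_mg(weight_kg: float, is_adult: bool) -> float:
    """0.01 mg/kg of the 1 mg/mL solution, capped at 0.3 mg (children) or 0.5 mg (adults)."""
    maximum = 0.5 if is_adult else 0.3
    return min(0.01 * weight_kg, maximum)

# Hypothetical examples: a 20 kg child -> 0.2 mg; a 70 kg adult computes to 0.7 mg
# but is capped at the 0.5 mg maximum. Doses may be repeated every 5-15 minutes.
print(im_epinephrine_dose_mg(20, is_adult=False))  # 0.2
print(im_epinephrine_dose_mg(70, is_adult=True))   # 0.5
```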
Delayed epinephrine administration is the strongest modifiable risk factor for fatal anaphylaxis.
Adjunctive therapies follow epinephrine but never replace it. These include:
• High-flow oxygen
• Aggressive IV crystalloid resuscitation
• H1 antihistamines (diphenhydramine)
• H2 blockers
• Systemic corticosteroids
• Bronchodilators for bronchospasm
For patients taking beta-blockers who develop refractory hypotension, glucagon may be used because it increases cyclic AMP independent of beta receptors.
Rarely, patients develop refractory anaphylactic shock, defined by persistent symptoms despite multiple epinephrine doses. Management may require IV epinephrine infusion, additional vasopressors, airway management, and aggressive fluid resuscitation. In extreme cases, methylene blue or extracorporeal life support has been used as rescue therapy.
Another important phenomenon is biphasic anaphylaxis, where symptoms recur after initial resolution without new allergen exposure. This occurs in about 5–6% of cases, typically within 10 hours.
Recommended observation periods depend on severity:
• Minimum 4 hours for uncomplicated reactions
• 6–8 hours with respiratory compromise
• 12–24 hours for hypotension or severe reactions
Longer monitoring is advised for patients with severe initial presentation, multiple epinephrine doses, asthma, unknown trigger, or prior biphasic reactions.
Special consideration is needed for patients on beta-blockers or ACE inhibitors, which may increase severity of reactions and blunt compensatory cardiovascular responses. These medications are not absolutely contraindicated, but management should involve shared decision-making regarding risks and benefits.
Long-term management focuses on prevention and preparedness.
All patients who experience anaphylaxis should receive:
• Epinephrine autoinjectors
• Education on prompt use
• Referral for allergy evaluation and trigger identification
Patients should carry two autoinjectors at all times, as repeat dosing may be required.
Additional prevention strategies include allergen avoidance, personalized emergency action plans, medical alert identification, and immunotherapy when appropriate. For example, venom immunotherapy is highly effective for Hymenoptera sting allergy, and oral immunotherapy is now available for peanut allergy.
We close with the key clinical pearls:
• Anaphylaxis is a clinical diagnosis—do not delay treatment
• Epinephrine IM is the first and most important therapy
• Food, medications, and insect venom are the most common triggers
• Cofactors like exercise or alcohol can lower reaction thresholds
• Always prescribe epinephrine autoinjectors and educate patients before discharge
Recognize the signs, inject epinephrine early, and monitor carefully—because in anaphylaxis, minutes matter and rapid treatment saves lives.
In this episode of Hospital Medicine Unplugged, we sprint through upper motor neuron (UMN) syndromes—how spasticity develops, how to separate true reflex hyperexcitability from fixed stiffness, and how to diagnose and manage major UMN diseases like PLS and hereditary spastic paraplegia.
We begin with spasticity, a defining feature of UMN injury that is not simply an immediate “release phenomenon.” Its delayed appearance after stroke or spinal cord injury points to maladaptive plasticity in both the spinal cord and brain. The core problem is loss of descending inhibitory control, with reduced corticospinal input, increased reticulospinal drive, impaired spinal inhibitory circuits, heightened alpha motor neuron excitability, and reduced postactivation depression, especially with immobilization.
Clinically, spasticity is a velocity-dependent increase in tone caused by hyperexcitable stretch reflexes. But bedside hypertonia often has two components: reflex-mediated spasticity and intrinsic soft tissue stiffness from contracture and rheologic muscle change. That distinction matters. The Modified Ashworth Scale measures overall resistance, but the Tardieu Scale better separates dynamic spasticity from fixed mechanical tightness.
We then turn to primary lateral sclerosis (PLS), the prototypical adult UMN-predominant degenerative disorder. The 2020 consensus criteria define probable PLS as 2–4 years of progressive UMN syndrome and definite PLS as more than 4 years of symptoms. That time threshold matters because shorter duration carries higher risk of later ALS conversion. Even in clinically pure PLS, minor EMG abnormalities like fasciculations or fibrillations can occur, especially with longer disease duration.
Next is hereditary spastic paraplegia (HSP), a genetically diverse disorder marked by bilateral leg spasticity, hyperreflexia, and extensor plantar responses from length-dependent corticospinal tract degeneration. HSP is classified into pure forms, where spastic paraparesis dominates, and complex forms, where additional neurologic features appear. Although SPAST (SPG4) and SPG7 are among the most commonly affected genes, a genetic diagnosis is still achieved in only a minority of patients.
MRI is the gold standard imaging study in suspected UMN disease. Key findings include corticospinal tract T2/FLAIR hyperintensity, especially in the posterior limb of the internal capsule and cerebral peduncles, as well as the “motor band sign,” a T2/SWI hypointensity in the precentral gyrus reflecting iron deposition. Motor cortex atrophy may be even more sensitive than the physical exam for detecting UMN degeneration and can appear before overt clinical signs.
We also highlight the distinction between pseudobulbar palsy and bulbar palsy. Pseudobulbar palsy comes from bilateral corticobulbar UMN lesions and produces spastic dysarthria, brisk jaw jerk, and exaggerated gag reflex. It is often accompanied by pseudobulbar affect—uncontrollable laughing or crying out of proportion to context. In contrast, bulbar palsy reflects LMN dysfunction of cranial nerve nuclei or nerves.
Management of spasticity follows a stepwise approach. Start with physical therapy, stretching, splinting, bracing, and positioning. For focal spasticity, botulinum toxin A has the strongest evidence and is preferred because it improves tone without systemic sedation. For generalized spasticity, oral agents include baclofen, tizanidine, benzodiazepines, and dantrolene, though benefit is often limited by sedation or weakness. In severe refractory cases, intrathecal baclofen pumps provide greater efficacy at lower doses—but pump failure can cause life-threatening withdrawal.
We close with the take-home moves: recognize that spasticity is dynamic neurophysiology plus biomechanics, use MRI and timeline to refine the diagnosis, distinguish pseudobulbar from bulbar syndromes, and treat spasticity with a layered rehab-first strategy before escalating to botulinum toxin or intrathecal therapy.
UMN syndromes are about more than “increased tone”—they are disorders of descending control, adaptive failure, and progressive disability, and careful phenotyping is what turns bedside findings into precise diagnosis and treatment.
In this episode of Hospital Medicine Unplugged, we break down endovascular infections—vascular graft infections, mycotic aneurysms, CIED infections, and septic thrombophlebitis syndromes—focusing on modern epidemiology, evolving microbiology, advanced imaging, and high-yield management strategies.
We begin with epidemiology clinicians should know. Vascular graft infections occur in ~0.5–6% of vascular reconstructions, while endovascular device infections occur in ~0.2–5% of procedures, with TEVAR carrying higher infection risk than abdominal repairs. Meanwhile, cardiac implantable electronic device (CIED) infections have increased substantially, reflecting growing device use.
Next we review key microbiology. Gram-positive cocci cause most vascular graft infections, with coagulase-negative staphylococci now more common than S. aureus. MRSA is increasingly prevalent and linked to worse outcomes, while Pseudomonas aeruginosa leads among gram-negative pathogens. For mycotic aneurysms, Salmonella species remain classic causes, with rare pathogens including Listeria and Mycobacterium tuberculosis.
We then highlight important clinical syndromes.
Lemierre syndrome presents with pharyngitis, internal jugular vein thrombosis, and septic emboli (often pulmonary) and is classically caused by Fusobacterium necrophorum, though MRSA and polymicrobial infections are increasingly recognized. Pylephlebitis, sometimes called the abdominal variant of Lemierre syndrome, involves septic thrombosis of the portal venous system.
Diagnosis relies heavily on advanced imaging. CTA is typically the first-line test, while 18F-FDG PET/CT provides high sensitivity (~94%) and excellent negative predictive value, especially in late graft infections. TEE remains essential for suspected CIED infection, though repeat imaging may be needed if initial studies are negative.
Management requires combined antimicrobial and procedural strategies.
• ≥6 weeks of antibiotics for most vascular graft infections
• Device removal for confirmed CIED infection, with early extraction improving survival
• 3–6 weeks of antibiotics for Lemierre syndrome, often metronidazole plus a β-lactam, with MRSA coverage in high-risk patients
• Lifelong suppressive therapy may be needed when infected devices cannot be removed
We close with key controversies and outcomes. Anticoagulation for septic thrombophlebitis remains debated, though many experts consider 6–12 weeks of therapy in selected patients. Despite advances, vascular graft infections and mycotic aneurysms carry high mortality—often exceeding 30% at one year—especially with MRSA.
Early recognition, PET/CT-guided diagnosis, aggressive antibiotics, and timely device removal remain the pillars of care for these complex infections.
In this episode of Hospital Medicine Unplugged, we sprint through delirium tremens—the most dangerous stage of alcohol withdrawal—recognize the neurochemical storm, identify high-risk patients, and treat aggressively with benzodiazepines and supportive care to prevent fatal complications.
We begin with epidemiology and why DTs matter. Delirium tremens occurs in 3–5% of hospitalized patients with alcohol withdrawal and represents the most severe manifestation of the withdrawal spectrum. The syndrome combines acute delirium—rapidly fluctuating attention and cognition—with severe autonomic hyperactivity. Historically, mortality approached 15%, but with modern aggressive treatment it has fallen to about 1–4%. When death occurs, it is usually due to hyperthermia, malignant arrhythmias, withdrawal seizures, or underlying medical illness.
Next comes the neurobiology driving withdrawal. Chronic alcohol exposure forces the brain to compensate for alcohol’s depressant effects. Over time:
• NMDA glutamate receptors are upregulated
• GABA-A inhibitory receptors are downregulated
While alcohol is present, its GABA-enhancing and NMDA-suppressing effects maintain balance. When alcohol is abruptly stopped, that balance collapses. The result is unopposed excitatory neurotransmission, increased glutamate signaling, reduced GABA inhibition, and massive central nervous system hyperexcitability. Additional contributors include increased norepinephrine activity, dopaminergic alterations, and calcium-mediated excitotoxicity, producing the agitation, tremor, and seizure risk characteristic of severe withdrawal.
Risk stratification is essential because not every patient with withdrawal develops delirium tremens. The strongest predictor is a prior history of DTs, which carries a likelihood ratio of roughly 2.9 for recurrence. Other important risk factors include:
• Recent withdrawal seizures, especially multiple seizures
• High CIWA-Ar scores (>15) with tachycardia or hypertension
• Older age (≥55 years)
• Concurrent illness such as infection, trauma, electrolyte abnormalities, or liver disease
• Hypokalemia and metabolic derangements
Another key concept is the kindling effect. Repeated withdrawal episodes progressively sensitize neuronal circuits, meaning each withdrawal episode tends to become more severe than the last.
The timeline of alcohol withdrawal follows a predictable pattern.
• 6–12 hours: early withdrawal—tremor, anxiety, tachycardia
• 12–24 hours: alcoholic hallucinosis (visual or auditory hallucinations)
• 12–48 hours: withdrawal seizures
• 72–96 hours: onset of delirium tremens, typically lasting 2–3 days but up to a week
Importantly, about one-third of untreated withdrawal seizures progress to delirium tremens, making early treatment critical.
Clinically, DTs presents with severe agitation and delirium combined with autonomic instability. Key features include:
• Fluctuating confusion and disorientation
• Marked agitation and psychomotor hyperactivity
• Tachycardia, hypertension, fever, and diaphoresis
• Coarse tremor and hyperreflexia
• Vivid visual hallucinations, often insects or animals
Diagnosis is clinical—delirium occurring in the context of alcohol withdrawal. The CIWA-Ar scale helps quantify withdrawal severity, but it becomes less reliable once patients develop delirium because it depends on patient responses. In ICU settings, clinicians often switch to CAM-ICU, RASS, MINDS, or DDS scales.
Laboratory evaluation should focus on complications and reversible triggers. Important tests include:
• Electrolytes (magnesium, potassium, phosphate)
• Glucose
• Liver function tests
• Creatine kinase for rhabdomyolysis risk
Neuroimaging should be obtained if the presentation is atypical or focal neurologic deficits are present.
Management centers on rapid sedation and physiologic stabilization.
Benzodiazepines are first-line therapy, acting as GABA-A agonists to counter the hyperexcitable brain. Two dosing strategies dominate:
• Symptom-triggered therapy, which reduces medication exposure and treatment duration
• Front-loading with high doses for severe withdrawal (CIWA-Ar ≥19)
Clinicians should not fear very high benzodiazepine doses. Severe DTs may require hundreds of milligrams of diazepam per day, and case reports describe successful treatment with 260–480 mg/day.
Agent selection depends on clinical context:
• Diazepam or chlordiazepoxide – preferred long-acting agents for front-loading
• Lorazepam – preferred in liver disease, since it lacks active metabolites
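Putting the strategy and agent choices above together in one place (a simplified sketch; the function and its inputs are assumptions, not a treatment protocol):

```python
def withdrawal_benzodiazepine_plan(ciwa_ar: int, liver_disease: bool) -> str:
    """Combine the CIWA-Ar threshold and agent-selection points described above."""
    agent = "lorazepam" if liver_disease else "diazepam or chlordiazepoxide"
    strategy = "front-loading" if ciwa_ar >= 19 else "symptom-triggered dosing"
    return f"{strategy} with {agent}"

# Example: CIWA-Ar 24 in a patient with cirrhosis -> front-loading with lorazepam,
# chosen because lorazepam lacks active metabolites.
print(withdrawal_benzodiazepine_plan(24, liver_disease=True))
```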
For benzodiazepine-refractory DTs, ICU-level therapies may be required.
Adjunctive options include:
• Phenobarbital, which enhances GABA signaling and may reduce mechanical ventilation risk
• Propofol, used in intubated patients with refractory agitation
• Dexmedetomidine, an α2-agonist that suppresses sympathetic overactivity while allowing arousable sedation
Antipsychotics should never be used as monotherapy, as they lower seizure threshold. They may be used only as adjuncts for severe hallucinations or agitation.
Supportive care is equally critical. One rule must never be forgotten:
Thiamine must be administered before glucose.
High-dose thiamine—500 mg IV once or twice daily for several days—prevents Wernicke encephalopathy, a devastating but preventable neurologic complication.
Other essential measures include:
• IV fluids and hydration
• Aggressive correction of magnesium, potassium, and phosphate
• Temperature control and frequent vital monitoring
• Calm, well-lit environment with reassurance and reorientation
Patients should be admitted to the ICU when they develop delirium tremens, severe withdrawal (CIWA ≥20), refractory symptoms despite benzodiazepines, unstable vital signs, or serious comorbid illness such as pancreatitis or GI bleeding.
Complications of DTs can be dramatic and life-threatening. They include:
• Cardiac arrhythmias and myocardial ischemia
• Hyperthermia
• Withdrawal seizures and status epilepticus
• Aspiration pneumonia
• Rhabdomyolysis with acute kidney injury
• Severe electrolyte disturbances, especially hypokalemia
Despite modern treatment, patients who experience DTs have significantly increased long-term mortality, with an annual mortality rate around 8%, particularly due to trauma, suicide, and other alcohol-related causes.
We close with the essential clinical pearls:
• Delirium tremens is a clinical diagnosis—treat immediately
• Prior DTs strongly predict recurrence
• Withdrawal severity increases with repeated episodes due to kindling
• Benzodiazepines are the cornerstone—very high doses may be required
• Thiamine must be given before glucose
• Always search for concurrent illness such as infection, trauma, or electrolyte abnormalities
Recognize the timeline, treat early, and sedate aggressively—because in delirium tremens, rapid intervention transforms a historically lethal syndrome into a manageable medical emergency.
In this episode of Hospital Medicine Unplugged, we sprint through urticaria—recognize the wheal, distinguish acute from chronic disease, uncover autoimmune drivers, and step through a modern treatment ladder that now includes biologics and BTK inhibitors.
We start with the definition and epidemiology. Urticaria is characterized by transient pruritic wheals, angioedema, or both, typically resolving within 24 hours without scarring. While about 20% of people experience urticaria at some point in life, chronic spontaneous urticaria (CSU) affects roughly 1% of the population and disproportionately affects women aged 30–50.
The key classification hinges on duration.
• Acute urticaria: symptoms lasting <6 weeks
• Chronic urticaria: symptoms ≥6 weeks
Fortunately, progression from acute to chronic disease occurs in fewer than 8% of cases. Risk factors for chronicity include antithyroid antibodies and poor response to antihistamines.
Next comes an important shift in our understanding of etiology. Historically, chronic urticaria was labeled “idiopathic” in most cases. We now know that more than half of patients actually have autoimmune disease mechanisms.
Two major autoimmune endotypes exist:
Type I autoimmune (autoallergic) CSU
• IgE autoantibodies against autoantigens such as thyroid peroxidase or IL-24
• Leads to mast-cell activation similar to allergic disease
Type IIb autoimmune CSU
• IgG autoantibodies against IgE or the FcεRI receptor
• Identified in about 8–10% of patients using strict diagnostic criteria
These immune mechanisms explain why less than 35% of CSU cases truly lack detectable autoantibodies.
Diagnosis is largely clinical but follows the “7C” framework:
Confirm diagnosis, identify causes, assess cofactors, evaluate comorbidities, assess consequences, evaluate biomarkers, and monitor disease course.
Routine laboratory testing should remain minimal unless clinical clues suggest otherwise. Recommended baseline tests include:
• CBC with differential
• ESR or CRP
• TSH
Certain red flags should trigger referral or further evaluation:
• Wheals lasting >24 hours
• Residual hyperpigmentation after lesions resolve
• Angioedema lasting several days without hives
• Systemic symptoms such as fever, arthralgia, or abdominal pain
To measure disease activity, clinicians rely on validated tools.
The Urticaria Activity Score over 7 days (UAS7) is the gold standard. Patients record itch severity and hive count twice daily, producing a score from 0 to 42.
Interpretation:
• 0: urticaria-free
• 1–6: well controlled
• 7–15: mild
• 16–27: moderate
• 28–42: severe
The Urticaria Control Test (UCT) is another practical tool. A score ≥12 indicates good control, while <12 suggests poorly controlled disease.
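Because the UAS7 is just a weekly sum with fixed bands, a small scoring sketch may help (simplified to one itch score and one hive score per day; all values hypothetical):

```python
def uas7_band(daily_itch: list[int], daily_hives: list[int]) -> str:
    """Sum 7 days of itch (0-3) and hive (0-3) scores and classify using the bands above."""
    total = sum(daily_itch) + sum(daily_hives)  # 0-42 overall
    if total == 0:
        return "urticaria-free"
    if total <= 6:
        return "well controlled"
    if total <= 15:
        return "mild"
    if total <= 27:
        return "moderate"
    return "severe"

# Example: itch 2 and hives 2 every day for a week -> 28 points -> "severe";
# a UCT score below 12 would likewise flag poorly controlled disease.
print(uas7_band([2] * 7, [2] * 7))  # severe
```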
Management follows a stepwise escalation strategy.
Step 1: Second-generation H1 antihistamines
Agents include cetirizine, loratadine, fexofenadine, levocetirizine, and desloratadine, taken daily rather than as needed. About 40% of patients achieve meaningful symptom reduction with standard dosing.
Step 2: Dose escalation
If symptoms persist, antihistamine doses can be increased up to fourfold. Evidence suggests quadrupling the dose of a single antihistamine is more effective than combining multiple agents.
Step 3: Omalizumab
For antihistamine-refractory disease, omalizumab 300 mg every 4 weeks is the standard biologic therapy. Clinical trials show complete remission (UAS7 = 0) in about 36% of patients, with substantially higher response rates in real-world practice.
Patients with incomplete response may require higher doses or shorter dosing intervals.
Step 4: Cyclosporine
For patients who fail omalizumab, cyclosporine (3–5 mg/kg/day) can improve symptoms in more than half of cases, although monitoring for renal toxicity and hypertension is essential.
Short courses of systemic corticosteroids (20–50 mg/day for <10 days) may help during severe flares but should never be used long-term.
The treatment landscape is expanding rapidly.
Two newly approved therapies include:
Dupilumab
An IL-4 receptor α antagonist approved in 2025 for patients ≥12 years with persistent CSU despite antihistamines. Clinical trials showed complete response in about 31% of patients.
Remibrutinib
A Bruton tyrosine kinase (BTK) inhibitor approved in 2025 that targets mast-cell signaling downstream of FcεRI activation. Trials demonstrated complete response rates up to ~42% with rapid onset within 2 weeks.
Another critical diagnostic consideration is urticarial vasculitis, which can mimic CSU but requires a different approach.
Key distinguishing features include:
• Lesions lasting >24 hours
• Pain or burning rather than itching
• Residual hyperpigmentation after resolution
Systemic symptoms such as fever, arthralgia, or eye inflammation further increase suspicion. Diagnosis is confirmed by skin biopsy showing leukocytoclastic vasculitis.
Finally, we talk prognosis. Chronic spontaneous urticaria is often self-limited but unpredictable.
Remission rates:
• 17% at 1 year
• 45% at 5 years
• 73% at 20 years
Average disease duration is 1–4 years, though relapse occurs in up to one-third of patients.
We close with the key clinical pearls:
• Chronic urticaria is rarely allergic—most cases involve autoimmune mast-cell activation
• High-dose antihistamines (up to four times standard dosing) are guideline-recommended and safe
• Omalizumab remains the cornerstone biologic therapy
• Cyclosporine, dupilumab, and BTK inhibitors expand options for refractory disease
• Always consider urticarial vasculitis when lesions last longer than 24 hours
Recognize the pattern, escalate therapy systematically, and remember—modern immunologic therapies are transforming outcomes for patients with chronic urticaria.
In this episode of Hospital Medicine Unplugged, we sprint through acute exacerbation of interstitial lung disease (AE-ILD)—recognize the sudden decline, rule out infection and cardiac causes, support oxygenation, and navigate a disease with limited treatment options and high mortality.
We begin with the diagnostic framework defined by the 2016 International Working Group. Acute exacerbation is characterized by rapid respiratory deterioration within about 1 month, accompanied by new bilateral ground-glass opacities or consolidation on CT superimposed on pre-existing fibrotic lung disease, with no evidence of cardiac failure or fluid overload. Importantly, this definition now applies across fibrosing interstitial lung diseases, not just idiopathic pulmonary fibrosis (IPF).
The critical bedside principle: AE-ILD is a diagnosis of exclusion. Infection, pulmonary embolism, pneumothorax, and heart failure must be aggressively ruled out because they can mimic exacerbations and require completely different management.
Next, we turn to pathobiology and why these patients deteriorate so rapidly. Acute exacerbations often represent diffuse alveolar damage superimposed on chronic fibrosis, producing a clinical picture similar to ARDS. However, in some non-IPF ILDs, organizing pneumonia patterns are more common—one reason those patients may respond better to immunosuppressive therapy.
Treatment remains challenging because no therapy has definitively proven benefit in randomized trials. Corticosteroids remain the most widely used intervention, but evidence is mixed.
Recent data suggest a key difference between ILD subtypes. In non-IPF ILD, higher-dose corticosteroids (>1 mg/kg prednisone equivalent) have been associated with improved survival and lower 90-day mortality. Early tapering—reducing doses by more than 10% within the first two weeks—may further improve outcomes.
In contrast, IPF exacerbations respond less predictably, and some studies suggest high-dose steroids may increase mortality, likely because the underlying pathology is often diffuse alveolar damage rather than steroid-responsive inflammation.
One therapy that should not be used is cyclophosphamide combined with steroids, which has been shown to increase mortality in acute exacerbations of IPF.
Respiratory support becomes the next critical decision point. Many patients develop severe hypoxemic respiratory failure, but outcomes with invasive mechanical ventilation are poor.
Across multiple studies:
• In-hospital mortality ranges from 66–79% in ventilated ILD patients
• Only ~20% of ventilated IPF patients survive to hospital discharge
Ventilator management therefore focuses on lung-protective strategies, similar to ARDS care:
• Low tidal volumes
• Plateau pressures ≤30 cm H₂O
• Avoid excessive PEEP, which has been associated with worse outcomes
• Careful fluid management to prevent worsening pulmonary edema
Because survival after intubation is so limited, early discussions about goals of care are essential. Noninvasive ventilation or high-flow nasal oxygen may be appropriate for selected patients who decline intubation.
Prevention is therefore critically important. Antifibrotic therapies have significantly reduced exacerbation risk in IPF.
Two major agents are used:
• Nintedanib – shown in the INPULSIS trials to reduce the risk of acute exacerbations
• Pirfenidone – also associated with lower exacerbation rates in multiple studies
Meta-analyses show antifibrotics reduce the risk of acute exacerbations by roughly 37%, and nintedanib has also been approved for progressive fibrosing ILDs beyond IPF, after the INBUILD trial demonstrated substantial slowing of lung function decline.
New therapies are also emerging. The FIBRONEER-ILD trial studied nerandomilast, a novel PDE-4 inhibitor, and although the composite endpoint did not reach statistical significance, the study demonstrated a meaningful reduction in mortality, suggesting a potential future role in progressive pulmonary fibrosis.
Another key strategy is early referral for lung transplantation, particularly in patients with progressive fibrotic disease. Acute exacerbations can occur unpredictably and often represent a terminal event in advanced ILD, making transplant evaluation crucial before severe deterioration occurs.
We close with the key system moves for inpatient teams:
• Recognize sudden respiratory decline in patients with fibrotic lung disease
• Confirm new bilateral ground-glass opacities on CT
• Aggressively rule out infection, pulmonary embolism, and heart failure
• Consider corticosteroids, particularly in non-IPF ILD
• Use lung-protective ventilation if respiratory failure develops
• Discuss prognosis early and involve palliative care
• Ensure patients with fibrotic ILD are on antifibrotic therapy when appropriate
Acute exacerbation of ILD remains one of the most devastating events in pulmonary medicine—but early recognition, careful supportive care, and preventive antifibrotic therapy can meaningfully improve outcomes.
In this episode of Hospital Medicine Unplugged, we sprint through status epilepticus—stop the seizure fast, escalate therapy on time, protect the brain, and treat the cause before refractory disease sets in.
We begin with the modern definition that changed emergency care. Status epilepticus is now defined as ≥5 minutes of continuous seizure activity or ≥2 seizures without return to baseline. The old 30-minute threshold is obsolete because neuronal injury and benzodiazepine resistance begin early, driven by GABA receptor internalization within minutes of sustained seizure activity. That’s why treatment must begin within the first 5–10 minutes.
The stakes are high: incidence is 10–40 per 100,000 annually, with 10–20% adult mortality, rising sharply in refractory cases, elderly patients, and acute symptomatic etiologies such as stroke or hypoxic injury.
Next comes the first-line intervention—benzodiazepines within 5–10 minutes. These remain Level A evidence therapy and terminate seizures in roughly 65–70% of cases when given promptly and at adequate doses.
Three effective options:
• IV lorazepam 0.1 mg/kg (max 4 mg), may repeat once
• IM midazolam 10 mg (0.3 mg/kg in children) — preferred if IV access unavailable
• IV diazepam 0.15 mg/kg, may repeat once
The biggest real-world mistake isn’t drug choice—it’s delay and underdosing.
If seizures persist, move quickly to second-line “urgent control” therapy (10–20 minutes). The landmark ESETT trial compared levetiracetam, fosphenytoin, and valproate in benzodiazepine-refractory status epilepticus and fundamentally changed practice.
The key finding: all three drugs work equally well, stopping seizures in about 47–52% of patients.
Recommended doses:
• Levetiracetam 60 mg/kg (max 4500 mg)
• Fosphenytoin 20 mg PE/kg
• Valproate 40 mg/kg
Because efficacy is equivalent, patient factors guide the choice:
• Cardiac disease → avoid fosphenytoin (hypotension/arrhythmia risk)
• Pregnancy or liver disease → avoid valproate
• Simplest safety profile → levetiracetam
Other alternatives include lacosamide or phenobarbital, though ESETT drugs remain the most widely used.
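Because the biggest real-world failure is delay and underdosing, here is a minimal sketch of the weight-based math using the doses quoted above. The fosphenytoin (1,500 mg PE) and valproate (3,000 mg) caps are assumptions borrowed from the ESETT protocol rather than figures stated in this episode, so verify against your own order sets.
```python
# Minimal dosing sketch using the weight-based doses quoted in this episode.
# The fosphenytoin (1,500 mg PE) and valproate (3,000 mg) caps are assumptions
# taken from the ESETT protocol, not stated above; always verify locally.

def capped_dose(mg_per_kg: float, weight_kg: float, max_mg: float | None = None) -> float:
    dose = mg_per_kg * weight_kg
    return min(dose, max_mg) if max_mg is not None else dose

def status_epilepticus_doses(weight_kg: float) -> dict[str, float]:
    return {
        "lorazepam_iv_mg": capped_dose(0.1, weight_kg, 4),          # first-line, may repeat once
        "levetiracetam_iv_mg": capped_dose(60, weight_kg, 4500),
        "fosphenytoin_iv_mg_PE": capped_dose(20, weight_kg, 1500),  # cap assumed from ESETT
        "valproate_iv_mg": capped_dose(40, weight_kg, 3000),        # cap assumed from ESETT
    }

if __name__ == "__main__":
    # Example: 80 kg adult -> lorazepam capped at 4 mg, levetiracetam capped at 4,500 mg
    print(status_epilepticus_doses(80))
```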
When seizures continue despite these steps, the patient has entered refractory status epilepticus, which occurs in 23–43% of cases. At this stage, escalation means ICU care, intubation, and continuous EEG monitoring.
Third-line therapy involves continuous anesthetic infusions designed to suppress cortical activity:
• Propofol (20–200 mcg/kg/min) — rapid onset but risk of propofol infusion syndrome with prolonged use
• Midazolam infusion — commonly used but tachyphylaxis develops
• Pentobarbital coma — powerful seizure suppression but high rates of hypotension and prolonged sedation
Most modern practice favors propofol or midazolam over barbiturate coma.
A newer strategy gaining traction is ketamine, an NMDA receptor antagonist with a completely different mechanism from GABAergic drugs. Unlike other anesthetics, ketamine preserves blood pressure and respiratory drive, making it a useful adjunct in refractory disease.
If seizures continue ≥24 hours despite anesthetic therapy, the condition becomes super-refractory status epilepticus, a devastating scenario with mortality approaching 40–50%.
Management expands to include:
• Ketamine infusions
• Immunotherapy (steroids, IVIG, plasmapheresis) when autoimmune etiologies are suspected
• Ketogenic diet
• Neuromodulation or epilepsy surgery in select cases
Two particularly challenging syndromes fall into this category:
NORSE (New Onset Refractory Status Epilepticus) and FIRES (Febrile Infection-Related Epilepsy Syndrome), often requiring aggressive immunologic treatment.
Throughout all stages, clinicians must identify and treat the underlying cause—the strongest determinant of outcome.
Prognosis varies dramatically depending on response to therapy:
• Benzodiazepine-responsive SE: <5% mortality
• Second-line responsive SE: ~10–15% mortality
• Refractory SE: 20–40% mortality
• Super-refractory SE: up to 50% mortality
Poor outcomes are associated with older age, acute symptomatic etiologies (stroke, infection, hypoxic injury), prolonged seizures, and nonconvulsive status epilepticus with coma, which is one of the strongest predictors of mortality.
Finally, systems matter. Hospitals that implement structured status epilepticus protocols dramatically improve outcomes. Protocol adherence reduces time to second-line therapy from nearly an hour to about 20 minutes and lowers ICU transfer rates.
We close with the practical escalation algorithm every inpatient team should know:
• 0–5 minutes: ABCs, glucose check, IV access, prepare meds
• 5–10 minutes: benzodiazepine (lorazepam, midazolam, or diazepam)
• 10–20 minutes: second-line AED (levetiracetam, fosphenytoin, or valproate)
• 20–40 minutes: prepare for intubation, initiate EEG monitoring
• Refractory SE: continuous anesthetic infusion in the ICU
• Super-refractory SE: ketamine, immunotherapy, ketogenic diet, or surgical options
Treat fast, escalate early, follow the algorithm, and search relentlessly for the underlying cause. In status epilepticus, every minute of uncontrolled seizure activity threatens the brain.
In this episode of Hospital Medicine Unplugged, we sprint through measles—one of the most contagious infectious diseases known—covering transmission, classic clinical presentation, complications, diagnosis, and prevention through vaccination.
We start with the big picture. Measles (rubeola) is a highly contagious viral illness caused by a paramyxovirus and remains a major global public health concern despite the availability of an effective vaccine. Clinically, the disease is defined by fever, cough, coryza, conjunctivitis, and a characteristic maculopapular rash.
After decades of progress toward elimination, measles has resurged worldwide. Global cases increased dramatically from about 132,000 in 2016 to nearly 870,000 in 2019, driven by large outbreaks and declining vaccination coverage. Pandemic-related disruptions to immunization programs worsened the problem, with global first-dose vaccine coverage dropping to 81% in 2021—the lowest level in more than a decade. Even in the United States, outbreaks continue to occur, with the vast majority of cases seen in unvaccinated individuals or those with unknown vaccination status.
The reason measles spreads so easily is its extraordinary transmissibility. The virus spreads through airborne respiratory droplets, and viral particles can remain suspended in the air for up to two hours after an infected person leaves the area. The basic reproduction number (R₀) is estimated at 12–18, meaning a single infected person can transmit the virus to more than a dozen susceptible individuals.
The incubation period is typically 10–14 days, and patients become contagious about four days before the rash appears and remain infectious until four days after rash onset. Because measles spreads so efficiently, achieving herd immunity requires at least 95% vaccine coverage with two doses.
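To see where the ≥95% figure comes from, here is a quick back-of-the-envelope sketch using the classic herd immunity threshold formula, 1 − 1/R₀, with the R₀ range quoted above; the ~97% two-dose vaccine effectiveness figure is borrowed from later in this episode.
```python
# Back-of-the-envelope herd immunity arithmetic: threshold = 1 - 1/R0.
# Uses the R0 range quoted above (12-18); ~97% effectiveness for two MMR
# doses is taken from later in this episode.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1 - 1 / r0

def required_coverage(r0: float, vaccine_effectiveness: float = 0.97) -> float:
    """Two-dose coverage needed once imperfect vaccine effectiveness is factored in."""
    return herd_immunity_threshold(r0) / vaccine_effectiveness

if __name__ == "__main__":
    for r0 in (12, 18):
        # R0 12 -> ~92% immune; R0 18 -> ~94% immune; dividing by 97% effectiveness
        # pushes required coverage to roughly 94-97%, consistent with the >=95% target.
        print(r0, round(herd_immunity_threshold(r0), 3), round(required_coverage(r0), 3))
```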
Clinically, measles follows a predictable three-phase course.
First is the prodromal phase, lasting about 2–4 days. Patients develop high fever along with the “three Cs”: cough, coryza, and conjunctivitis. A key diagnostic clue during this phase is the appearance of Koplik spots—small bluish-white lesions on the buccal mucosa, which are considered pathognomonic and typically appear 1–2 days before the rash.
Next comes the exanthem phase, when the classic erythematous maculopapular rash appears. The rash begins on the face and hairline, then spreads downward to the trunk and extremities over several days. It often becomes confluent on the face and upper body before gradually fading in the same order that it appeared.
The final stage is the convalescent phase, during which symptoms gradually resolve, typically within about one week after rash onset in uncomplicated cases.
Despite this classic presentation, measles is far from benign. Approximately 30–40% of patients develop complications, particularly among infants, adults, pregnant individuals, immunocompromised patients, and malnourished children.
Common complications include:
• Otitis media (about 7–9% of cases)
• Pneumonia, the leading cause of measles-related death
• Diarrhea and dehydration
More severe complications involve the central nervous system. Acute postinfectious encephalitis occurs in roughly 1 in 1,000 cases and carries a 20% mortality rate. Immunocompromised patients may develop measles inclusion-body encephalitis, which is nearly always fatal.
One of the most devastating complications is subacute sclerosing panencephalitis (SSPE)—a progressive degenerative brain disease that develops 7–10 years after infection. Although rare, it is universally fatal within several years of onset.
Measles also produces a phenomenon known as immune amnesia. The virus suppresses immune memory for 2–3 years, increasing vulnerability to other infections and contributing to excess mortality even after recovery from the acute illness.
Diagnosis is often suspected clinically when patients present with fever, rash, and the three Cs, especially if Koplik spots are observed. However, laboratory confirmation is essential—particularly during outbreaks or in previously vaccinated individuals who may develop modified measles with milder symptoms.
Diagnostic testing includes:
• RT-PCR detection of viral RNA
• Measles IgM serology using ELISA
• Viral genotyping for epidemiologic surveillance
Management of measles is primarily supportive, as no specific antiviral therapy exists. Treatment focuses on hydration, fever control, nutritional support, and treatment of secondary bacterial infections.
One intervention with strong evidence is vitamin A supplementation. Clinical trials have demonstrated that vitamin A significantly reduces measles-related mortality, particularly in children. Current recommendations are to administer two age-adjusted doses of vitamin A on consecutive days, with an additional dose given later if vitamin A deficiency is suspected.
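As a rough illustration of "two age-adjusted doses," here is a minimal sketch. The specific IU amounts follow the commonly cited WHO age bands and are assumptions on our part, not figures stated in this episode; always confirm against current guidance.
```python
# Illustrative sketch of the "two age-adjusted doses" idea. The IU amounts
# below follow commonly cited WHO age bands (assumed here, not stated in the
# episode); confirm against current local guidance before use.

def vitamin_a_dose_iu(age_months: float) -> int:
    if age_months < 6:
        return 50_000
    if age_months < 12:
        return 100_000
    return 200_000

def vitamin_a_course(age_months: float, deficiency_suspected: bool = False) -> list[int]:
    dose = vitamin_a_dose_iu(age_months)
    course = [dose, dose]        # given on consecutive days
    if deficiency_suspected:
        course.append(dose)      # additional dose given later if deficiency is suspected
    return course

if __name__ == "__main__":
    # Example: 18-month-old with suspected deficiency -> 200,000 IU x 3 doses
    print(vitamin_a_course(18, deficiency_suspected=True))
```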
Ultimately, the most effective strategy against measles is prevention through vaccination. The MMR vaccine provides highly effective protection, with two doses achieving approximately 97% immunity.
We close with the key clinical takeaways:
• Measles is among the most contagious infectious diseases known
• The classic presentation includes fever, cough, coryza, conjunctivitis, Koplik spots, and a descending maculopapular rash
• Complications occur in up to 40% of cases and can include pneumonia and encephalitis
• Vitamin A supplementation significantly reduces mortality in children
• Maintaining ≥95% vaccination coverage is essential for herd immunity
Measles is often viewed as a childhood disease of the past—but as vaccination rates decline in some regions, clinicians must remain vigilant. Recognizing measles quickly, isolating cases, and promoting vaccination remain essential tools for preventing outbreaks and protecting vulnerable populations.
In this episode of Hospital Medicine Unplugged, we sprint through aplastic anemia—recognize the pancytopenia, confirm marrow failure, suppress the immune attack, and watch for clonal evolution.
We open with the diagnostic framework that defines disease severity. The Camitta criteria remain the standard classification. Severe aplastic anemia requires bone marrow cellularity <25% plus at least two of three cytopenias:
• ANC <500/μL
• Platelets <20,000/μL
• Reticulocytes <60,000/μL
Very severe disease is defined by ANC <200/μL, a group at extremely high infection risk. These criteria matter because treatment decisions hinge on severity, patient age, and donor availability.
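Here is a minimal sketch of that severity logic, using only the thresholds quoted above; the function name and inputs are ours and purely illustrative.
```python
# Minimal sketch of the Camitta severity classification described above.
# Inputs are marrow cellularity (%) and peripheral counts; the thresholds are
# the ones quoted in this episode.

def classify_aplastic_anemia(cellularity_pct: float, anc_per_ul: float,
                             platelets_per_ul: float, retics_per_ul: float) -> str:
    cytopenias = sum([
        anc_per_ul < 500,
        platelets_per_ul < 20_000,
        retics_per_ul < 60_000,
    ])
    severe = cellularity_pct < 25 and cytopenias >= 2
    if severe and anc_per_ul < 200:
        return "very severe"
    if severe:
        return "severe"
    return "non-severe (criteria not met)"

if __name__ == "__main__":
    # Example: cellularity 10%, ANC 150, platelets 12,000, retics 20,000 -> very severe
    print(classify_aplastic_anemia(10, 150, 12_000, 20_000))
```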
Next, we unpack the pathophysiology—the immune system attacking hematopoiesis. In acquired aplastic anemia, cytotoxic T cells target hematopoietic stem and progenitor cells (HSPCs). These T cells release interferon-γ and TNF-α, which activate Fas/Fas-ligand pathways, triggering apoptosis of stem cells and collapsing bone marrow production.
Recent discoveries explain how immune escape and clonal selection occur. IFN-γ suppresses HSPCs expressing HLA class I molecules, while HLA-deficient clones evade immune destruction, creating a survival advantage in about 30% of patients. Meanwhile, regulatory T cells are decreased, and their restoration often parallels hematologic recovery.
Then comes one of the biggest therapeutic advances in decades: adding eltrombopag to immunosuppressive therapy. Standard treatment has long been horse antithymocyte globulin (ATG) plus cyclosporine, but the RACE trial changed the landscape.
When eltrombopag was added:
• Complete response at 3 months doubled (22% vs 10%)
• Overall response at 6 months increased to 68% vs 41%
• Median time to response shortened from 8.8 months to 3 months
Because of these results, ASH 2025 guidelines now recommend eltrombopag alongside ATG and cyclosporine for severe and very severe aplastic anemia in both adults and children. Beyond thrombopoiesis, eltrombopag appears to stimulate stem-cell recovery and counteract IFN-γ–mediated suppression.
For curative therapy, we turn to hematopoietic stem cell transplantation. Modern guidelines use age-based decision pathways:
• <20 years: matched sibling transplant or immunosuppressive therapy
• 20–40 years: matched unrelated donor transplant or immunosuppressive therapy
• >40 years: immunosuppressive therapy preferred
Haploidentical transplant is generally reserved for later lines, with immunosuppressive therapy favored initially.
Another hallmark of aplastic anemia is its relationship with paroxysmal nocturnal hemoglobinuria (PNH). Up to 50% of patients harbor a PNH clone, detectable by FLAER-based flow cytometry with CD55/CD59 testing. Importantly, the presence of a PNH clone is actually a favorable prognostic sign.
Patients with PNH clones show:
• Better response to immunosuppressive therapy (≈78% vs 50%)
• Better transplant outcomes (≈97% vs 77%)
• Lower risk of progression to myelodysplastic syndrome
The mechanism is elegant: GPI-negative stem cells with PIGA mutations escape immune destruction, allowing them to survive when normal HSPCs are targeted.
Despite improved survival, long-term complications remain a major concern. About 10–20% of patients develop clonal evolution to myelodysplastic syndrome or acute myeloid leukemia within 10 years, particularly after immunosuppressive therapy.
Genomic studies show clonal hematopoiesis in roughly half of patients, with mutation patterns predicting outcomes:
• DNMT3A and ASXL1 → higher risk of clonal expansion and progression to MDS/AML
• BCOR, BCORL1, and PIGA → better response and more stable disease
Risk factors for clonal evolution include older age, poor response to immunosuppressive therapy, high-risk mutations, and multiple mutations with higher allele burden.
When secondary myeloid malignancies occur, they often carry high-risk cytogenetics such as monosomy 7 or complex karyotypes, with ~40% five-year survival without transplant. Fortunately, allogeneic stem cell transplantation can improve survival to about 64%.
The big picture today is far more optimistic than in the past. With modern therapy—including ATG, cyclosporine, and eltrombopag—five-year survival now exceeds 80%, with some studies reporting survival approaching 97%. But relapse still occurs in about 22% of patients, making long-term monitoring essential.
We close with the system moves: confirm severe disease using Camitta criteria, suppress the immune attack with ATG + cyclosporine + eltrombopag, evaluate for transplant based on age and donor availability, screen all patients for PNH clones, and monitor closely for clonal evolution.
Aplastic anemia is no longer uniformly fatal—but recognizing the immune mechanism, treating early, and tracking clonal risk can transform outcomes for these patients.
In this episode of Hospital Medicine Unplugged, we sprint through Brugada syndrome—spot the ECG, stratify the risk, prevent sudden cardiac death, and avoid the triggers that unmask malignant arrhythmias.
We start with the ECG that makes the diagnosis. Type 1 Brugada pattern is the only diagnostic finding: coved ST elevation ≥2 mm in ≥1 right precordial lead (V1–V3) followed by a negative T wave. The 2013 consensus simplified the diagnosis—a Type 1 pattern alone (spontaneous or drug-induced) is sufficient, without requiring symptoms or family history. Type 2 (“saddleback”) pattern shows ≥0.5 mm ST elevation with a convex ST segment and positive T wave, but it is only suggestive, not diagnostic.
Next comes a critical inpatient pearl: Brugada ECG patterns are dynamic. They may appear and disappear and are often unmasked by fever, sodium-channel blockers, increased vagal tone, or post-prandial states. This is why hospitalized patients with unexplained syncope or fever should have careful ECG review—the pattern may only be visible transiently.
Risk stratification drives management and ICD decisions, which is one of the hardest parts of Brugada care.
High-risk patients (Class I indication for ICD):
• Prior cardiac arrest or documented sustained ventricular arrhythmia
• Spontaneous Type 1 ECG with syncope presumed due to ventricular arrhythmia
These patients face annual event rates of roughly 5–10%, making ICD therapy life-saving.
Intermediate risk:
• Asymptomatic spontaneous Type 1 pattern (≈0.5–1.2% annual risk)
• Syncope with spontaneous Type 1 pattern (≈6–19% event risk over ~2–3 years)
• Fever-induced Type 1 ECG pattern
Low risk (ICD not indicated):
• Asymptomatic patients with only drug-induced Type 1 pattern
• Event rate <0.5% annually
A key myth to bust: family history alone does NOT reliably predict individual arrhythmic risk.
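Here is a deliberately crude tier lookup that mirrors the strata above; it is not the validated BRUGADA-RISK model discussed next, it omits nuances such as fever-induced patterns, and the event-rate figures are simply the ones quoted in this episode.
```python
# Crude tier lookup mirroring the risk strata above. Not the BRUGADA-RISK
# model, and simplified (e.g., fever-induced Type 1 patterns are omitted);
# event rates are the figures quoted in this episode.

def brugada_risk_tier(spontaneous_type1: bool, drug_induced_type1: bool,
                      prior_arrest_or_sustained_vt: bool,
                      arrhythmic_syncope: bool) -> str:
    if prior_arrest_or_sustained_vt or (spontaneous_type1 and arrhythmic_syncope):
        return "High risk: Class I ICD indication (~5-10% annual event rate)"
    if spontaneous_type1:
        return "Intermediate risk: ~0.5-1.2% annual risk if asymptomatic; consider EPS"
    if drug_induced_type1:
        return "Low risk: <0.5% annual risk; ICD not indicated if asymptomatic"
    return "No diagnostic Type 1 pattern documented"

if __name__ == "__main__":
    print(brugada_risk_tier(True, False, False, True))   # spontaneous Type 1 + arrhythmic syncope -> high risk
    print(brugada_risk_tier(False, True, False, False))  # drug-induced only, asymptomatic -> low risk
```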
We also review BRUGADA-RISK, a modern clinical risk model incorporating ECG and clinical variables to estimate 5-year arrhythmic risk, showing strong predictive performance with ~71% sensitivity and ~80% specificity at a 10% risk threshold.
Then comes the debate over electrophysiology studies (EPS). Current guidelines give Class IIb support for programmed ventricular stimulation in asymptomatic patients with spontaneous Type 1 ECG. Inducible arrhythmias roughly double the risk of events, but the absence of inducibility does not guarantee safety, so clinical history still matters most.
When prevention matters most, remember the bottom line: ICD implantation is the only proven therapy that prevents sudden cardiac death in Brugada syndrome.
Class I indications for ICD:
• Survivors of cardiac arrest
• Documented spontaneous sustained VT
• Spontaneous Type 1 ECG with arrhythmic syncope
For patients with recurrent ICD shocks, escalation strategies include quinidine therapy or catheter ablation targeting abnormal epicardial substrate in the RV outflow tract.
Hospital teams must also know the medications that worsen Brugada. Drugs that block cardiac sodium channels can unmask the ECG pattern and trigger ventricular arrhythmias. High-risk categories include Class IC antiarrhythmics (flecainide, propafenone), tricyclic antidepressants, lithium, certain antipsychotics, local anesthetics like bupivacaine, propofol, cocaine, and excessive alcohol. The safest move is to check brugadadrugs.org before prescribing.
Finally, one of the most important bedside pearls: fever is a powerful arrhythmic trigger in Brugada syndrome. Temperature-dependent sodium channel dysfunction can convert a silent patient into a ventricular arrhythmia emergency. That’s why aggressive antipyretic therapy is mandatory in any febrile Brugada patient—especially in children and hospitalized patients.
We close with the take-home system moves: recognize the Type 1 ECG pattern, treat fever aggressively, avoid sodium-channel-blocking medications, risk-stratify carefully, and implant ICDs in the right patients.
Brugada syndrome may hide in plain sight—but once you know the ECG and the triggers, you can identify risk early and prevent sudden cardiac death.
In this episode of Hospital Medicine Unplugged, we tackle one of the most ethically charged and clinically challenging topics in inpatient care: the use of restraints in the hospital setting. When are restraints justified, why do we still use them so often, and what does the evidence actually show about benefit versus harm?
We start by defining physical restraints—any device or method that limits a patient’s movement, from wrist and ankle restraints to vests, belts, bed rails, and enclosure beds—and chemical restraints, medications used primarily to control behavior rather than treat an underlying condition. We unpack why experts increasingly reject the term “chemical restraint,” emphasizing pharmacologic treatment of agitation aimed at calming, not sedating, patients while addressing root causes.
Next, we explore why restraints are used: fall prevention, prevention of device removal, management of delirium or agitation, and protection of staff. But here’s the paradox—observational data consistently show higher rates of the very outcomes restraints are meant to prevent, including unplanned extubations, device removal, increased agitation, delirium, and longer ICU stays.
We break down the scope of the problem. Nearly 1 in 10 hospitalized patients experiences restraint use, with rates approaching 40% of ICU encounters and even higher among mechanically ventilated patients. Use varies widely by setting, staffing, and culture—highlighting that restraint use is often system-driven, not patient-driven.
The heart of the episode focuses on ethics and law. Restraints represent a profound restriction of liberty, and ethical use requires three conditions: medical appropriateness, informed consent (or a valid emergency exception), and use of the least restrictive option. We review federal regulatory requirements—restraints only for imminent harm, after less restrictive measures fail, time-limited orders, mandatory face-to-face evaluations, continuous monitoring, and early removal.
We then confront the real harms. Physically: DVT, PE, aspiration pneumonia, fractures, pressure injuries, rhabdomyolysis, asphyxiation, and death. Psychologically: fear, loss of dignity, and PTSD, affecting up to 25–47% of patients after a restraint event. These risks rise with each additional day of restraint use.
From there, we pivot to what actually works: alternatives. Multicomponent, non-pharmacologic strategies—reorientation, sleep hygiene, pain control, early mobility, family engagement, sitters, sensory optimization, and delirium prevention bundles like ABCDEF—reduce delirium and restraint use by 40–60% while improving outcomes.
We close with practical takeaways: assess underlying causes first (pain, hypoxia, infection, withdrawal, delirium), use verbal de-escalation and environment before meds, reserve restraints for true emergencies, document meticulously, reassess relentlessly, and remove early. The bottom line: restraints are not benign, not preventive, and not routine care—they are a last resort in modern, patient-centered hospital medicine.
Fast, evidence-driven, and ethically grounded—protect safety without sacrificing dignity.
In this episode of Hospital Medicine Unplugged, we tackle dementia with behavioral and psychological symptoms (BPSD) in the hospitalized patient—why it happens, how to assess it fast, and how to manage it safely without making things worse.
We start with the big picture: BPSD affects >90% of people with dementia, often driving hospital admissions. Symptoms span agitation, aggression, psychosis, depression, anxiety, apathy, sleep disturbance, and disinhibition—and they’re not benign. In the hospital, BPSD is linked to longer stays, higher mortality, restraint use, staff injury, early institutionalization, and one-third of total dementia care costs.
Next, we walk through the do-first inpatient assessment. Rule out delirium (acute onset, fluctuating attention), then hunt for reversible triggers: pain, constipation, urinary retention, infection, hypoxia, metabolic derangements, sleep disruption, and iatrogenic harm from polypharmacy—especially anticholinergics, benzodiazepines, and opioids. Collateral history is critical to establish baseline behavior. Use structured tools like CAM/4AT for delirium, PAINAD for nonverbal pain, and NPI or CMAI to quantify symptoms. The DICE approach (Describe–Investigate–Create–Evaluate) keeps management personalized and efficient.
We emphasize that non-pharmacologic strategies are first-line—always. In the hospital, this means person-centered care: reorientation, sleep hygiene, early mobility, sensory optimization (glasses/hearing aids), hydration, toileting, nutrition, and calm communication. Caregiver- and staff-focused interventions have the strongest evidence, reducing both symptom burden and distress. Music therapy, tailored activities, exercise, massage/touch, and multicomponent delirium programs like HELP can meaningfully reduce agitation and prevent escalation.
When symptoms threaten safety, we cover how to use meds sparingly and smartly. Before adding anything, do a medication cleanup. Pharmacotherapy is time-limited, lowest dose, shortest duration, and always paired with non-drug strategies.
• Cholinesterase inhibitors can modestly improve BPSD over time.
• Antipsychotics offer small benefits for severe agitation or psychosis but carry real risks—increased mortality, stroke, sedation, EPS, QT prolongation, and functional decline. No clear winner among agents. Use hours to days, reassess daily, and document risk–benefit discussions. Avoid dopamine blockers in Lewy body dementia; if unavoidable, extreme caution.
• SSRIs help depression/anxiety; evidence for agitation is limited. Mirtazapine doesn’t help agitation.
• Benzodiazepines and valproate should generally be avoided.
• Pain control matters—untreated pain fuels agitation.
We close with hospital pearls: no routine drugs for delirium; antipsychotics only for dangerous behaviors refractory to non-drug care. Plan early for deprescribing—one-third of patients started on antipsychotics in the hospital leave on them unless you actively stop them. At discharge, communicate what worked: triggers, de-escalation strategies, sleep plans, toileting schedules, and a clear reassessment plan. Align care with goals, dignity, and function.
Bottom line: Treat the cause, lead with non-pharmacologic care, reserve meds for safety, reassess relentlessly, and deprescribe early.
In this episode of Hospital Medicine Unplugged, we tackle one of the most anxiety-provoking inpatient consults: acute facial weakness—Bell’s palsy or stroke? We break down how to tell them apart fast, why the distinction matters, and how to manage each safely in hospitalized patients.
We start with the bedside exam that saves lives. Forehead involvement = peripheral (Bell’s palsy); forehead sparing = central (stroke)—until proven otherwise. Bell’s palsy presents with acute unilateral facial paralysis involving the forehead, often peaking within 72 hours, and may include post-auricular pain, altered taste, hyperacusis, or dry eye, without other neurologic deficits. Stroke typically hits suddenly, often spares the forehead, and comes with red flags like limb weakness, aphasia, gaze deviation, dysphagia, or altered mental status.
We walk through the don’t-miss pitfalls: brainstem strokes that mimic a lower motor neuron pattern, bilateral facial weakness, gradual or progressive onset, recurrent ipsilateral palsy, hearing loss or vertigo, and facial palsy in post-op, ICU, immunocompromised, or cancer patients—all of which demand a lower threshold for imaging and expanded workup.
Next, the inpatient diagnostic strategy. Suspect stroke? Activate the stroke alert—determine last known well, check glucose, perform a focused neuro exam, and get emergent CT/MRI. For classic Bell’s palsy, routine labs and imaging aren’t required, but in hospitalized patients consider MRI with contrast or CSF if there are atypical features, infection risk, multiple cranial nerves involved, or no improvement by 3–6 weeks.
Treatment pearls you can use today:
Bell’s palsy—start oral corticosteroids within 72 hours (prednisone 50–60 mg daily x5 days, then taper). This improves complete recovery (NNT ≈10). Antivirals alone don’t work; adding them to steroids may modestly reduce synkinesis, especially in severe paralysis. Eye protection is non-negotiable: artificial tears, nighttime ointment, and a moisture shield—early ophthalmology if exposure risk.
Stroke—time is brain. Eligible patients get IV thrombolysis and/or mechanical thrombectomy based on time and imaging, with guideline-directed blood pressure control, antithrombotics, and early rehab.
We close with prognosis and counseling. Bell’s palsy has a 70–85% complete recovery rate (higher with early steroids), but 25–40% may have residual weakness or synkinesis—plan follow-up at 3 months if recovery lags. Stroke outcomes hinge on severity and speed to reperfusion, making rapid recognition critical.
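For listeners who want to see where an NNT of about 10 comes from, here is the arithmetic (NNT = 1 / absolute risk reduction); the specific recovery rates in the example are illustrative assumptions roughly consistent with the 70–85% range above, not trial figures.
```python
# Quick arithmetic behind "NNT ~10": NNT = 1 / absolute risk reduction.
# The example recovery rates below are illustrative assumptions, not figures
# from this episode (which quotes 70-85% complete recovery overall).

def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    return 1 / abs(treated_event_rate - control_event_rate)

if __name__ == "__main__":
    # e.g. complete recovery improving from ~72% untreated to ~82% with early steroids
    print(round(nnt(0.72, 0.82)))  # -> 10
```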
Bottom line: Examine the forehead, hunt for red flags, image early when in doubt, protect the eye, treat fast, and never miss a stroke.
In this episode of Hospital Medicine Unplugged, we get practical about single vs dual antiplatelet therapy after ischemic stroke—who gets what, for how long, and when DAPT does more harm than good.
We start by framing the landscape: noncardioembolic vs cardioembolic stroke, small-vessel vs large-artery disease, and why platelets are center stage in atherothrombotic stroke but not in AF-driven cardioembolism.
Then we walk through who actually qualifies for DAPT:
• Minor noncardioembolic ischemic stroke (NIHSS ≤3)
• High-risk TIA (ABCD² ≥4)
• Select mild-to-moderate strokes (up to NIHSS 5) and large-artery atherosclerosis / intracranial stenosis, where data for intensified therapy are emerging.
We lay out exact protocols you can copy into your order sets:
• Classic aspirin + clopidogrel: loading, maintenance, and how to transition cleanly to SAPT.
• Ticagrelor + aspirin: when to prefer it (e.g., CYP2C19 loss-of-function) and how to factor in the higher bleeding signal.
• Why triple therapy is a hard no.
A big chunk of the episode is “how long is long enough?”:
• Why the real benefit of DAPT is front-loaded into the first 10–21 days.
• How CHANCE, POINT, THALES, and meta-analyses sharpen the message: short-term DAPT cuts early recurrence; longer DAPT mainly buys bleeding.
• Why most patients should land on ~21 days of DAPT, then SAPT indefinitely, and when 30–90 days might still make sense (e.g., intracranial stenosis, stenting protocols).
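Pulling the eligibility and duration threads together, here is a minimal decision sketch. The thresholds (NIHSS ≤3, ABCD² ≥4, ~21 days, up to 90 days for intracranial stenosis) are the ones quoted in this episode, and it deliberately ignores the exclusions covered next, so treat it as an illustration rather than a protocol.
```python
# Minimal sketch of the "who gets DAPT, and for how long" logic above.
# Thresholds are the ones quoted in this episode; exclusions such as
# cardioembolic stroke are handled separately (see the list that follows).

def dapt_plan(nihss: int, abcd2: int | None = None,
              intracranial_stenosis: bool = False) -> str:
    if intracranial_stenosis:
        return "DAPT for up to 90 days, then single antiplatelet therapy"
    if nihss <= 3 or (abcd2 is not None and abcd2 >= 4):
        return "DAPT for ~21 days, then single antiplatelet therapy indefinitely"
    return "single antiplatelet therapy (no clear DAPT benefit)"

if __name__ == "__main__":
    print(dapt_plan(nihss=2))           # minor stroke -> ~21 days of DAPT
    print(dapt_plan(nihss=0, abcd2=5))  # high-risk TIA -> ~21 days of DAPT
    print(dapt_plan(nihss=8))           # moderate stroke -> SAPT only
```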
We also spell out when NOT to use DAPT:
• Moderate–severe stroke with big infarcts and hemorrhagic risk
• Cardioembolic stroke (AF, LV thrombus, valvular disease) where anticoagulation wins
• Lacunar stroke, where SPS3 showed more bleeding without benefit
• Patients with high bleeding risk or prior GI bleed, thrombocytopenia, or hemorrhagic transformation
• ESUS and other gray zones where DAPT has no proven upside.
Finally, we zoom out to long-term secondary prevention:
• Choosing between aspirin, clopidogrel, and aspirin–dipyridamole
• Why clopidogrel often has the best net clinical profile (similar efficacy, less major bleeding)
• How to build a stroke unit habit: NIHSS + ABCD² on arrival, early mechanism workup, tight DAPT stop dates, and defaulting back to SAPT instead of “set-and-forget” dual therapy.
If you’ve ever wondered “Should this patient be on DAPT, for how long, and what am I risking?” this episode gives you a crisp, evidence-based playbook you can use on your next stroke admission.
In this episode of Hospital Medicine Unplugged, we hit the brakes on routine bridging—who actually needs LMWH/UFH when you stop warfarin, and who is safer with no bridge at all?
We start by nailing the definition: bridging = temporarily swapping a long-acting oral anticoagulant (usually warfarin) for short-acting heparin (UFH/LMWH) during interruptions for procedures or bleeding. Then we zoom out to the core tension: tiny peri-procedural thromboembolic risk vs a 3–4× jump in major bleeding with bridging.
We walk through thromboembolic risk stratification—AF with CHA₂DS₂-VASc, recent VTE timing, mechanical valves, and severe thrombophilia—and pair it with procedure and patient bleeding risk (neurosurgery vs dental work, HAS-BLED factors, renal/liver disease, prior bleeds).
Then comes the evidence gut-punch:
BRIDGE: in AF on warfarin, no reduction in thromboembolism, but major bleeding triples with LMWH bridging.
Meta-analyses: no thrombotic benefit, big bleeding signal across mixed AF/VTE/mechanical valve cohorts.
PAUSE & DOAC data: rapid onset/offset means DOACs almost never need bridging.
From there we carve out the true bridging exceptions—the “maybe yes” group:
• Mechanical mitral or older-generation mechanical valves
• Very recent (<3 months) VTE or stroke/systemic embolism
• Severe thrombophilia or high-risk cancer-associated VTE
Everywhere else, guidelines increasingly say: “Don’t bridge.” Most AF, remote VTE, bileaflet mechanical AVR without extra risk factors, and all DOAC-treated patients go down a simple interrupt-and-restart pathway instead of heparin drips and syringes.
We close with a practical, ward-ready playbook:
• Step 1: Classify thromboembolic risk (AF/VTE/valve).
• Step 2: Classify procedure + patient bleeding risk.
• Step 3: If DOAC → timed hold based on drug + kidney function, no bridge.
• Step 4: If warfarin and truly very high thrombotic risk → consider LMWH/UFH, but delay/avoid post-op therapeutic dosing when bleeding risk is high.
• Step 5: Use prophylactic-dose LMWH as VTE prophylaxis, not as a stealth “mini-bridge.”
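As a ward-level illustration of those five steps, here is a minimal sketch; the branch points mirror the exceptions listed earlier, the function and parameter names are ours, and it is a mental model rather than a policy.
```python
# Ward-level sketch of the "bridge vs no bridge" logic above. The high-risk
# bucket mirrors the exceptions listed earlier (mechanical mitral or older-
# generation valve, VTE/stroke within ~3 months, severe thrombophilia);
# everything else defaults to interrupt-and-restart without heparin.

def bridging_plan(on_doac: bool, mechanical_mitral_or_old_valve: bool = False,
                  vte_or_stroke_within_3_months: bool = False,
                  severe_thrombophilia: bool = False,
                  high_bleed_risk_procedure: bool = False) -> str:
    if on_doac:
        return "No bridge: timed DOAC hold based on agent and renal function"
    high_thrombotic_risk = (mechanical_mitral_or_old_valve
                            or vte_or_stroke_within_3_months
                            or severe_thrombophilia)
    if not high_thrombotic_risk:
        return "No bridge: hold warfarin, restart post-procedure"
    if high_bleed_risk_procedure:
        return "Consider bridging, but delay/avoid post-op therapeutic dosing"
    return "Consider LMWH/UFH bridging"

if __name__ == "__main__":
    print(bridging_plan(on_doac=True))                                   # -> no bridge
    print(bridging_plan(on_doac=False, mechanical_mitral_or_old_valve=True))  # -> consider bridging
```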
By the end, you’ll have a clean mental algorithm for “bridge vs no bridge” that lines up with ACCP, AHA/ACC, and AF guidelines—less bleeding, same stroke protection, and far fewer unnecessary heparin shots.
In this episode of Hospital Medicine Unplugged, we run the inpatient anticoagulation playbook—pick the right drug, dose it safely, and dodge both clots and bleeds.
We start with why we anticoagulate in hospital: VTE treatment and prophylaxis, AF stroke prevention, ACS, and valve/bridging scenarios—always walking the tightrope between thrombosis and bleeding. Then we map the four main drug classes:
• DOACs as default for most nonvalvular AF and VTE—rapid onset, predictable PK, no routine monitoring, but no go in mechanical valves, APS, pregnancy, or severe CKD.
• LMWH as the inpatient workhorse—VTE treatment and prophylaxis, cancer, pregnancy, lower HIT and osteoporosis risk, but renally cleared and only partially reversible.
• UFH as the rescue drug—severe renal failure, high bleeding risk, need for rapid on/off, thrombolysis, or upcoming procedures.
• Warfarin for the “can’t DOAC” crowd—mechanical valves, rheumatic MS, triple-positive APS, advanced CKD/liver disease—with INR targets, monitoring, and all the interaction baggage.
From there we build a clinical decision framework:
• Renal and hepatic function drive DOAC vs LMWH vs UFH vs warfarin.
• Bleeding risk (HAS-BLED, IMPROVE) and prior GI bleed shape how hard you push.
• Drug–drug interactions (CYP3A4/P-gp, polypharmacy, antiplatelets/NSAIDs) push you toward or away from certain DOACs or warfarin.
• Extremes of body weight, procedures on the horizon, and the need for a reversal plan all feed into the choice.
For VTE treatment, we compare:
• Stable DVT/PE with good kidneys → DOAC first (EINSTEIN, AMPLIFY, Hokusai).
• Cancer-associated VTE → DOACs vs LMWH: DOACs now common, but LMWH still preferred for GI/GU tumors or very high bleeding risk.
• Severe CKD or tenuous bleeder → UFH infusion with aPTT-guided titration and protamine in your back pocket.
• Mechanical valves or APS → warfarin with bridging.
For VTE prophylaxis, we keep it practical:
• LMWH (enoxaparin/dalteparin) as first-line in most medical/surgical inpatients.
• UFH for CrCl <30 or when you need a super short half-life.
• DOACs (rivaroxaban, betrixaban) as select tools for extended prophylaxis, not routine inpatient starters.
• Warfarin stays out of acute prophylaxis.
We also tackle cardiology use cases:
• UFH (or bivalirudin) for ACS/PCI.
• Enoxaparin as an alternative in NSTE-ACS/unstable angina when managed medically.
• DOACs and warfarin move to the long-game: AF, LV thrombus, and post-ACS patients who need both antiplatelets and anticoagulation.
Finally, we walk through special populations:
• Severe CKD → UFH first; dose-reduced LMWH or off-label apixaban only with eyes wide open.
• Morbid obesity or very low weight → how we think about fixed-dose DOACs, weight-based LMWH, and when to consider anti-Xa levels.
• Pregnancy → LMWH only, no DOACs or warfarin.
• Cancer → choosing between DOACs and LMWH based on tumor site, bleeding risk, kidneys, and patient preferences.
We close with a bedside checklist: define the indication, check kidney/liver function, estimate bleeding risk, scan the med list for interactions, ask “how fast do I need on/off?”, then lock in an agent, dose, monitoring plan, and a clear reversal strategy. Right drug, right patient, right moment—while keeping both clots and bleeds off your service list.
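To make that checklist concrete, here is a deliberately oversimplified selection sketch; it encodes only the broad strokes described above (mechanical valve or triple-positive APS toward warfarin, pregnancy toward LMWH, severe renal failure or a need for rapid on/off toward UFH, otherwise a DOAC), and it is an illustration, not a prescribing tool.
```python
# Deliberately oversimplified sketch of the bedside checklist above. It
# encodes only the broad strokes discussed in this episode and ignores many
# real-world factors (weight extremes, interactions, bleeding scores, etc.).

def pick_anticoagulant(mechanical_valve_or_aps: bool = False,
                       pregnant: bool = False,
                       crcl_ml_min: float = 90,
                       needs_rapid_on_off: bool = False,
                       cancer_gi_gu_tumor: bool = False) -> str:
    if mechanical_valve_or_aps:
        return "warfarin (INR monitoring, +/- bridging)"
    if pregnant:
        return "LMWH"
    if crcl_ml_min < 30 or needs_rapid_on_off:
        return "UFH (titrated, rapid on/off, reversible)"
    if cancer_gi_gu_tumor:
        return "LMWH (preferred over a DOAC for GI/GU tumors)"
    return "DOAC"

if __name__ == "__main__":
    print(pick_anticoagulant())                              # default -> DOAC
    print(pick_anticoagulant(crcl_ml_min=20))                # severe CKD -> UFH
    print(pick_anticoagulant(mechanical_valve_or_aps=True))  # -> warfarin
```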