While debugging a mid-power push-pull isolated supply (~100W range), I’ve once again hit the classic wall:
- Runs fine under light load
- MOSFET temperatures shoot up under full load
- EMI spikes appear around 180-250 MHz during pre-compliance checks
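As a sanity check on where that 180-250 MHz energy might come from: noise in that band usually points to parasitic ringing of loop inductance against device capacitance. A rough back-of-envelope sketch, using assumed numbers (10 nH of loop inductance and a 200 MHz ring, both hypothetical, not measured values from this board):

```python
import math

f_ring = 200e6   # assumed ring frequency (Hz), roughly the observed EMI peak
L_par = 10e-9    # assumed parasitic loop inductance (H)

# f_ring = 1 / (2*pi*sqrt(L*C))  ->  solve for the parasitic capacitance
C_par = 1 / ((2 * math.pi * f_ring) ** 2 * L_par)

# Characteristic impedance of the ring; a snubber R near this value damps it
Z0 = math.sqrt(L_par / C_par)

print(f"C_par = {C_par * 1e12:.1f} pF, Z0 = {Z0:.1f} ohms")
```

With these assumptions the implied capacitance lands in the tens of pF (Coss territory), which is one way to decide whether the ring is a device parasitic or a layout loop.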
The frustrating loop:
- Adding snubbers reduces EMI but increases losses, heating up the MOSFETs
- Using faster switches improves efficiency but spikes EMI
- Adding shielding lowers EMI but traps more heat
- Slowing the switching edges reduces EMI but increases switching losses
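The snubber side of that tug-of-war is easy to quantify: an RC snubber dissipates roughly C*V^2*f_sw, since the cap is charged and discharged through R every cycle, and that number is independent of the R chosen. A minimal sketch with assumed values for a ~100 W stage (all three numbers are hypothetical placeholders, not from the actual design):

```python
# Assumed operating point for a ~100 W push-pull stage
C_snub = 470e-12   # snubber capacitance (F)
V_ds = 100.0       # peak drain voltage across the snubber (V)
f_sw = 200e3       # switching frequency (Hz)

# Energy C*V^2 is lost per cycle (half on charge, half on discharge),
# all of it in the snubber resistor, regardless of the R value.
P_snub = C_snub * V_ds**2 * f_sw
print(f"Snubber dissipation = {P_snub:.2f} W")
```

At these assumed values each snubber burns close to 1 W, i.e. roughly a point of efficiency per switch, which is why oversizing the snubber cap shows up so quickly on the MOSFET and snubber-resistor temperatures.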
It feels like an endless tug-of-war between noise suppression and thermal management.
What I’ve Tried:
- Careful PCB layout: short switching loops, 4-layer stackup, partitioned grounds
- Input π-filters and output LC filters (helpful, but limited)
- RC and RCD snubbers to tame voltage spikes, at the cost of extra heat
- Dead-time tweaks to reduce cross-conduction, which pushed EMI back up
- Thermal simulation plus improved heatsinks and airflow, though thermal margins remain tight
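One systematic way to size those snubbers rather than trial-and-error them is the classic two-measurement bench method: record the bare ring frequency, add a known capacitor across the switch until the ring slows, then solve for the parasitic L and C and pick R near the characteristic impedance. A sketch of the arithmetic, with hypothetical bench numbers (a 200 MHz ring halving to 100 MHz when 220 pF is added):

```python
import math

def size_rc_snubber(f_ring, f_ring_loaded, C_added):
    """Two-step ring method: from the unloaded and loaded ring
    frequencies, recover the parasitic L/C, then choose
    R = Z0 for damping and C_snub ~ 3x the parasitic C."""
    ratio = (f_ring / f_ring_loaded) ** 2
    C_par = C_added / (ratio - 1)                      # parasitic capacitance
    L_par = 1 / ((2 * math.pi * f_ring) ** 2 * C_par)  # parasitic inductance
    R = math.sqrt(L_par / C_par)                       # damping resistor = Z0
    return R, 3 * C_par

# Hypothetical bench measurement, not data from this supply
R, C_snub = size_rc_snubber(200e6, 100e6, 220e-12)
print(f"R = {R:.1f} ohms, C_snub = {C_snub * 1e12:.0f} pF")
```

The payoff is that the resulting snubber is close to critically damped, so you spend the minimum C*V^2*f_sw loss needed to kill the ring instead of guessing high.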
I’d like to ask the community:
- In your push-pull or half-bridge designs, how do you systematically balance EMI suppression against thermal management?
- What are your early debug strategies for quickly identifying hidden EMI sources?
- How do you find the sweet spot between fast dv/dt and acceptable EMI?
- Have you run into this "efficiency vs. heat vs. EMI" triangle at similar power levels, and how did you resolve it?
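On the dv/dt question specifically, the tradeoff can at least be bounded on paper: the spectrum of a trapezoidal switch waveform rolls off at -40 dB/decade above the corner f2 = 1/(pi * t_rise), while first-order hard-switching loss grows linearly with edge time. A sketch comparing a few edge rates at an assumed operating point (100 V, 2 A, 200 kHz, all hypothetical):

```python
import math

V_ds, I_d, f_sw = 100.0, 2.0, 200e3   # assumed operating point, ~100 W class

for t_rise in (10e-9, 30e-9, 100e-9):
    # Above f2 the trapezoid's EMI envelope falls at -40 dB/decade,
    # so pushing f2 down directly attacks the VHF band.
    f2 = 1 / (math.pi * t_rise)
    # First-order hard-switching loss estimate, both edges per cycle
    P_sw = 0.5 * V_ds * I_d * (2 * t_rise) * f_sw
    print(f"t_rise = {t_rise * 1e9:4.0f} ns: "
          f"f2 = {f2 / 1e6:6.1f} MHz, P_sw = {P_sw:.2f} W")
```

Under these assumptions, slowing from 10 ns to 100 ns edges trades about 3.6 W of extra switching loss for a tenfold-lower spectral corner, which is the knob the 180-250 MHz band responds to.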
Why it matters:
In the ~100W push-pull range (portable devices, radios, drones), we often need to:
- Maintain a stable output
- Pass EMI compliance
- Survive continuous full-load operation in the field
But getting all three simultaneously is much harder than theory suggests.
If you’ve had your own “failures” and “fixes” while tuning high-frequency switchers, please share your story here.
It would help build a real-world reference for others facing these exact headaches, so we can debug smarter, not harder.
Thanks!