There is a need to detect situations where a plan is cached for a parameterised query using a parameter value that returns a small number of rows, and that plan is then reused for a "heavy" parameter value.
There is an XEvent with this description:

> Occurs when the smaller of the estimated or actual cardinality of an
> iterator is at least 1 million rows and if the estimated and actual
> cardinality differ by at least 1 order of magnitude.
But from the description, the smaller of the two cardinalities needs to be at least 1 million rows in the first place, so the larger would need to be >= 10 million. That is probably not useful for typical problem queries in transactional workloads.
You can also look at `inaccurate_cardinality_estimate` (in the "Debug" channel, so not listed in the UI by default):
> Occurs when an operator outputs significantly more rows than estimated
> by the Query Optimizer. Use this event to identify queries that may be
> using sub-optimal plans due to cardinality estimate inaccuracy. Using
> this event can have a significant performance overhead so it should
> only be used when troubleshooting or monitoring specific problems for
> brief periods of time.
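If you do want to capture that event for a brief troubleshooting window, a minimal event session sketch might look like the following (the session name, file target, and actions are my own choices, not anything prescribed):

```sql
-- Sketch: short-lived session capturing inaccurate_cardinality_estimate.
-- Session name and target filename are assumptions; adjust to taste.
CREATE EVENT SESSION [CardinalityMisestimates] ON SERVER
ADD EVENT sqlserver.inaccurate_cardinality_estimate
(
    -- Collect the statement text and plan handle for later investigation
    ACTION (sqlserver.sql_text, sqlserver.plan_handle)
)
ADD TARGET package0.event_file
(
    SET filename = N'CardinalityMisestimates.xel', max_file_size = 50
);

ALTER EVENT SESSION [CardinalityMisestimates] ON SERVER STATE = START;

-- Given the documented overhead, stop and drop it once you're done:
-- ALTER EVENT SESSION [CardinalityMisestimates] ON SERVER STATE = STOP;
-- DROP EVENT SESSION [CardinalityMisestimates] ON SERVER;
```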
My preferred approach to query tuning is simply to investigate the queries that are using the most resources or are particularly problematic, rather than mining for specific issues. You will then organically encounter the cases where this is causing an actual problem, and can fix the ones with the most bang for the buck.
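As a starting point for that approach, something like the following DMV query surfaces the most CPU-hungry cached statements; including `min_rows`/`max_rows` gives a rough hint of parameter sensitivity when they differ wildly for the same plan (the TOP value and ordering column are just my choices):

```sql
-- Sketch: top cached statements by total CPU, with the row-count spread
-- across executions as a crude parameter-sensitivity signal.
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time,
    qs.min_rows,                 -- fewest rows any execution returned
    qs.max_rows,                 -- most rows any execution returned
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```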