That’s a known issue:
https://github.com/markdown-it/markdown-it/issues/38
It’s possible to fix, but not easy. A workaround is available.
The correct solution is to make the linkifier part of the tokenizer process. That’s expensive (for example, an email lookahead check for every character). The tradeoff is to listen for `:`, then look behind for http(s) and look ahead for the rest. That’s not universal, but it will cover all real cases:
- http/https links will be parsed along with other tokens, with higher priority than emphasis
- everything else will be detected via text scan & regexps (as the linkifier works now); the probability of collision is very low.
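The listen-for-`:` approach above can be sketched roughly like this (a minimal illustration, not markdown-it's actual code; the function name and the simplified URL pattern are my own):

```javascript
// Sketch of the tradeoff described above: instead of running a full check on
// every character, react only on ':', look behind for "http"/"https", then
// look ahead for the rest of the URL.
function scanLinks(text) {
  const found = [];
  for (let i = 0; i < text.length; i++) {
    if (text[i] !== ':') continue; // cheap trigger: listen for ':'

    // look behind for the protocol
    let proto = null;
    if (text.slice(Math.max(0, i - 5), i).endsWith('https')) proto = 'https';
    else if (text.slice(Math.max(0, i - 4), i).endsWith('http')) proto = 'http';
    if (proto === null) continue;

    // look ahead for "//" plus the rest of the URL (deliberately simplified)
    const rest = /^:\/\/[^\s<>]+/.exec(text.slice(i));
    if (rest === null) continue;

    found.push(text.slice(i - proto.length, i) + rest[0]);
  }
  return found;
}
```

The point is the cost model: the expensive lookbehind/lookahead only runs on `:` characters, not on every position in the text.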
I have no plans to do this, but if anyone wishes to implement it, see the explanation above. Or use `< >`.
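For reference, the `< >` workaround is standard autolink syntax: wrapping the URL in angle brackets makes it an explicit link token, so it cannot collide with emphasis markers. A quick illustration (the URL is hypothetical):

```markdown
See http://example.com/a_b_c for details.   <!-- plain text, left to the linkifier -->
See <http://example.com/a_b_c> for details. <!-- explicit autolink, always a link -->
```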