Prefer urlsplit over urlparse to destructure URLs
TIL from this video that Python's `urllib.parse.urlparse` is quite slow at parsing URLs. I've always used `urlparse` to destructure URLs and didn't know that the standard library ships a faster alternative: `urlsplit`. The official documentation also recommends it.

The `urlparse` function splits a supplied URL into multiple separate components and returns a `ParseResult` object. Consider this example:

```python
In [1]: from urllib.parse import urlparse

In [2]: url = "https://httpbin.org/get?q=hello&r=22"

In [3]: urlparse(url)
Out[3]: ParseResult(
    scheme='https',
    netloc='httpbin.org',
    path='/get',
    params='',
    query='q=hello&r=22',
    fragment=''
)
```

You can see how the function disassembles the URL and builds a `ParseResult` object out of its components. Along with this, `urlparse` can also parse an obscure type of URL that you'll most likely never need. If you look closely at the previous example, you'll see that there's a `params` field in the `ParseResult` object. This `params` field gets parsed whether you need it or not, and that adds some overhead. The `params` field will be populated if you have a URL like this: ...
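For instance, `urlparse` treats everything after a semicolon in the last path segment as path parameters. The URL below is a made-up example of my own that populates the field:

```python
In [4]: urlparse("https://httpbin.org/get;key=val?q=hello")
Out[4]: ParseResult(
    scheme='https',
    netloc='httpbin.org',
    path='/get',
    params='key=val',
    query='q=hello',
    fragment=''
)
```

`urlsplit` skips this step entirely and leaves `;key=val` in the path, which is part of why it's faster.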
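If you want to put a rough number on the speed difference, here's a minimal `timeit` sketch (the snippet is my own, and the absolute timings will vary with your machine and Python version):

```python
from timeit import timeit
from urllib.parse import urlparse, urlsplit

url = "https://httpbin.org/get?q=hello&r=22"

# Parse the same URL a million times with each function and
# print the total wall-clock time in seconds.
print("urlparse:", timeit(lambda: urlparse(url), number=1_000_000))
print("urlsplit:", timeit(lambda: urlsplit(url), number=1_000_000))
```

On CPython, `urlsplit` should come out ahead, since it never does the extra params parsing.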