Topic "popular links" panel domain extraction doesn't handle country TLDs

It looks as though the domain extraction logic doesn’t understand country TLDs in domains – so it’s considering as a domain, rather than the more-appropriate that the link uses.

This is, of course, a really tiny issue – at worst, it’s a bit confusing or meaningless. :slight_smile:


Our current logic only extracts the last two levels of the domain name:

I think it’ll be easier and clearer if we just show the domain instead of trying to figure out what the root domain is.


I agree that simply showing the full domain is probably okay.

If you want to keep the current “identify the actual domain” behavior, the Public Suffix List is probably a good place to get started :slight_smile:


I vote against carrying a giant library or case statement just to remove a www once in a while.

My vote is to simply show the domain and do away with this magic.

If we MUST … keep the magic for domains that end with .com and .org


Fixed in


Hmm can you provide some examples of old and new here?


This is the new version where we show the full domain.


Hmm that’s pretty nasty… can’t say I am a fan.

Couldn’t we have a simple regex that allows a few 2- and 3-letter dotted phrases at the end?


TLDs are a pain though; if they are long like… stuff is gonna break. I guess the general logic would be

  • grab the rightmost period and word chars next to it
  • if it is too short, grab the next leftward period and word too

This would handle as it is clearly way too short to be a real domain. is also too short, I think. So the threshold is “must be more than 7 chars with just one period”.
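For concreteness, the heuristic described above might look roughly like this in Ruby. This is a sketch only, not Discourse’s actual code: the method name `short_domain` is hypothetical, the example hostnames are my own (the thread’s originals were elided), and the threshold counts chars excluding the period, per the “more than 7 chars with just one period” rule.

```ruby
# Sketch of the proposed heuristic (hypothetical, not production code).
def short_domain(host)
  parts = host.split(".")
  # Step 1: grab the rightmost period and the word chars on either side of it.
  candidate = parts.last(2).join(".")
  # Step 2: if it's too short (7 chars or fewer, not counting the period),
  # grab the next leftward period and word too.
  if candidate.delete(".").length <= 7 && parts.length > 2
    candidate = parts.last(3).join(".")
  end
  candidate
end

short_domain("meta.discourse.org")  # => "discourse.org"
short_domain("www.example.co.uk")   # => "example.co.uk"
```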

I did some more research and it seems like the only practical way to solve this problem is to match the domains against the Public Suffix List.

If we want to, we could include the list server side (it’s only 188 KB) and send it down to the client.
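To illustrate what PSL-based matching actually does, here is a minimal sketch with a tiny hardcoded slice of the list standing in for the real 188 KB file (in practice you’d load the full list, e.g. via the `public_suffix` gem). The function name and example hostnames are my own, for illustration:

```ruby
# Tiny stand-in for the real Public Suffix List (illustrative only).
# "blogspot.com" really is on the PSL, which is why each Blogspot blog
# counts as its own registrable domain.
PUBLIC_SUFFIXES = %w[com org co.uk blogspot.com].freeze

def registrable_domain(host)
  parts = host.split(".")
  # Scan from the longest candidate suffix down to the shortest; the first
  # hit is the longest matching public suffix. Keep one extra label to the
  # left of it -- that's the registrable domain.
  (0...parts.length).each do |i|
    suffix = parts[i..-1].join(".")
    if PUBLIC_SUFFIXES.include?(suffix)
      return i.zero? ? host : parts[(i - 1)..-1].join(".")
    end
  end
  host
end

registrable_domain("www.example.co.uk")      # => "example.co.uk"
registrable_domain("someblog.blogspot.com")  # => "someblog.blogspot.com"
```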


Why does the client need this? The server can just send the split-off domain and handle that in the serializer.

My issue with the public suffix gem, though, is that it bloats the Ruby process with A LOT of strings; the list file is big and stored in memory, at least 1 RVALUE per domain.


Oops what I meant is we will determine the domain name server side. I don’t mean send down the entire list :sweat_smile:

I am totally open to including public suffix if we build a simple gem that uses to perform these lookups :slight_smile: It should only take a day or so to build and will help the entire Ruby community.

I am strongly against carrying the Ruby implementation here; it is a memory hog (and adds tons of RVALUEs to our heap).


This is pointless @tgxworld – can you explain why my simple suggested logic is not sufficient? I don’t see why we need to check “real” TLDs.

I discussed this with him and there are mountains of edge cases: (should pick, since is a public suffix), (should pick, as it’s not a public suffix), (should pick), (should pick, since is a public suffix).

Something has to give here or we will junk the wrong part… it’s nice to properly attribute domains and shorten as much as possible.

For context, it appears Hacker News follows public suffix rules.


My logic covers all the listed cases.

I disagree that showing vs is incorrect.

The whole point is that you want a hint of where you will be going, there is no rule saying it must be perfectly predictive. Showing and is correct in this case.

I don’t agree it is correct; the whole reason for the public suffix list is so “blogger” and various other providers can provide “public suffixes”. That way it is clear that you are linking to my blog vs some random blog on Blogger.

There are plenty of examples of public suffixes, blogger among them; Japan seems to be really into this, and the list goes on and on.

I am fine to shelve this as too hard for now, but the regex you have there is way too optimistic. If we are going to hack this I would just special-case to

  • Take last 3 parts

e.g. (yellow pages in Israel) would show up as which is back to square one here.


That is NOT the point; the point is

where does this go?


where does this go?

The fact that it goes to tells me it’s a blog, and the top-level domain this will lead me to if I click. That’s what I needed to know; I do NOT need to know that it goes to

You are scope creeping the feature far beyond what was intended and I strongly disagree. I believe the simple heuristic I described:

  • grab the rightmost period and word chars next to it
  • if it is too short (7 chars or less), grab the next leftward period and word too

… not a regex but an if-then … will be good enough, and closer to what was already there, versus quietly scope-creeping this up to perfection.

You are missing my bigger point: you are suggesting a very aggressive regex. If we want to cut corners and do a shortcut here, then fine.

I am fine with a shortcut that culls domains to three parts [part 1].[part 2].[part 3]

  • I prefer to err on the side of caution here, which is particularly good for international domains, and always take the last 3 parts. This adds more text but is a lot less edge-casey with international domains. … yes, this sucks for but is good for, and lots of other short internationals.

  • You are suggesting aggressively culling out [part 1], which works fine for .com and .org domains but is a lot less friendly to and domains and so on.
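The “always keep the last 3 labels” shortcut from the first bullet is a one-liner. A sketch (method name and example hostnames are my own, since the thread’s examples were elided):

```ruby
# Conservative shortcut: always keep the last three dotted labels.
# Good for international registries like .co.uk, at the cost of showing
# an extra subdomain label on plain .com/.org hosts.
def last_three_labels(host)
  host.split(".").last(3).join(".")
end

last_three_labels("www.example.co.uk")   # => "example.co.uk"
last_three_labels("meta.discourse.org")  # => "meta.discourse.org"
```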


Just reread the suggested algorithm; always filling a buffer to a minimum of 7 chars, picking up to 3 segments, may work.

Not suggesting a regex at all. Just simple logic based on periods and string length.

  1. Locate the rightmost period → .
  2. Add all non-period characters to the right and left of it →
  3. Is this string more than 7 chars? If yes, you are done. If not, add the leftmost period and leftmost non-periods →

And for

  1. .
  3. done, string is > 7 chars
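The buffer-filling variant (minimum 7 chars, at most 3 labels) can be sketched in a few lines of Ruby. Again hypothetical, with my own example hostnames:

```ruby
# Sketch: fill a buffer right-to-left until it holds more than 7
# non-period chars, taking at most 3 labels.
def buffered_domain(host)
  parts = host.split(".")
  taken = 1
  taken += 1 while taken < 3 &&
                   taken < parts.length &&
                   parts.last(taken).join.length <= 7
  parts.last(taken).join(".")
end

buffered_domain("meta.discourse.org")  # => "discourse.org"
buffered_domain("www.example.co.uk")   # => "example.co.uk"
```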

The problem here is that there is no good length that we can use to get all the cases right.

Let’s take for example,

The right output we should get is

Where does this go?

Note that just displaying or is incorrect here because it is as good as displaying where we don’t provide any indication of where the site is going.

Assuming we determine that 7 chars is a good length, the heuristic algorithm will only produce which is not what we want. Just to get this case right, the length we use will have to be 17 chars, excluding the periods, and we have to start considering the number of periods in the domain. If we bump the number of chars too much, we’ll end up displaying the full domain like which brings us back to square one.