Unfortunately, the “log every search the server makes” approach is incorrect: we perform searches as people are typing, which would result in a massively noisy log.
Instead, log on the server with the following algorithm on each search:

```sql
UPDATE search_logs
SET term = :new_term,
    created_at = :now
WHERE created_at > 5.seconds.ago AND
      position(term in :new_term) = 1 AND
      (user_id = :user_id OR ip_address = :ip_address)
```

with binds:

```ruby
new_term: new_term,
now: Time.zone.now,
user_id: current_user&.id,
ip_address: request.ip
```

If the update touches zero rows, insert a **new** search log row.
Or, in English, update the existing search log row IF:

- It is the same user (for anonymous users match on IP address, for logged-in users match on user_id)
- The current search starts with the text of the previous search, e.g. previous was “dog” and new is “dog in white”
- The previous search was logged less than 5 seconds ago
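The coalescing rule above can be sketched in plain Ruby against an in-memory list; `SearchLog`, `user_key`, and `COALESCE_WINDOW` are illustrative names for this sketch, not the actual Discourse schema or API:

```ruby
# Illustrative in-memory model of a search log row (not the real table).
SearchLog = Struct.new(:term, :user_key, :created_at)

COALESCE_WINDOW = 5 # seconds, matching the rule above

# Update the most recent matching row in place, or append a new one.
def log_search(log, new_term, user_key, now)
  row = log.reverse.find do |r|
    r.user_key == user_key &&
      now - r.created_at < COALESCE_WINDOW &&
      new_term.start_with?(r.term) # previous term is a prefix of the new one
  end

  if row
    # Same user, recent, and a prefix: fold into the existing row.
    row.term = new_term
    row.created_at = now
  else
    log << SearchLog.new(new_term, user_key, now)
  end
  log
end
```

So typing “dog” and then “dog in white” within 5 seconds produces one row, while an unrelated search (or the same refinement after the window closes) produces a new row.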
On click on a search result (in either full page search or the header), update the clicked_topic_id: have search results return the log id, then update the row matched by log id + same user + logged within the last 10 minutes.
Limiting log size
So the log does not grow forever, there should be a site setting for the maximum number of rows to store. The default should be about a million.
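A trimming pass could then drop the oldest rows whenever the cap is exceeded. A minimal sketch, where `max_search_log_rows` stands in for the site setting and the log is an in-memory list kept in insertion order:

```ruby
# Drop the oldest entries so the log never exceeds max_rows.
def trim_search_log(log, max_rows)
  overflow = log.size - max_rows
  # Oldest rows are at the front because the log is in insertion order.
  log.shift(overflow) if overflow > 0
  log
end
```

In the real table the equivalent would be a periodic job deleting the lowest-id rows beyond the configured maximum.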
So if I search for something and open three results in three tabs, what happens? Last one “wins”?
The “multiple clicks for one search, save last only” behavior is better for the case where someone searches, clicks, hits back, clicks again, hits back, and clicks a final time. New tabs subvert that, though.
If I search for a topic, right click to copy URL, and then paste that in as a “you should look here” answer, is that counted as a “click”?
Copying URLs from posts does not increase click count, for reference. But this is a case when the desired search result really should be saved.
I’m not sure what you mean by “how they work”, but as for the “descriptions”, those are “column definitions” or “schema information”, and are recognizable as such to anyone experienced with database tables.
From there you can search for variable and function names etc to find other files that interact with the table (a good IDE or text editor helps a lot).
I guess I may be peculiar, but for me the definitions in combination with good descriptive field names are what I’m used to reading when I read schema.
It requires a shift from reading English to reading code, and I doubt that “translating” code to English would be practical, if even possible. That said, it is often easy enough to explain a small piece of code to answer a specific question.