Why are there so many Disallow rules in robots.txt?

What if we did the following instead (a rough sketch of these steps follows)?

Step 1. Allow crawling of the path /u/
Step 2. Set a `noindex, follow` header for the path /u/
Step 3. Restrict profile access to logged-in users
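To make the idea concrete, here is a minimal sketch of steps 2 and 3, assuming a generic Python/Flask app rather than any particular forum platform (the route, session check, and header value are illustrative assumptions, not real forum code). Step 1 would simply mean not listing /u/ under Disallow in robots.txt, or explicitly adding an `Allow: /u/` rule.

```python
# Minimal Flask sketch (illustrative only, not any real forum's code):
# responses under /u/ get a "noindex, follow" robots header (step 2),
# and profile pages require a logged-in session (step 3).
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # hypothetical secret, needed for sessions

@app.route("/u/<username>")
def profile(username):
    # Step 3: only logged-in users may view profiles.
    if "user_id" not in session:
        abort(403)
    return f"Profile page for {username}"

@app.after_request
def add_robots_header(response):
    # Step 2: ask crawlers not to index /u/ pages but still follow links.
    if request.path.startswith("/u/"):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```

Note that a crawler only sees the `X-Robots-Tag` header if it can fetch the page at all; once step 3 returns 403 to anonymous requests, the header on the profile content itself matters little, which is exactly what the question below is getting at.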

If something is restricted to logged-in users, a bot cannot access it either, so duplicate-content concerns are already taken care of. So why is there still a Disallow rule for the path /u/?