They should follow the same format you expect your outputs to take: if you want short replies in subsequent interactions, keep the examples short, and vice versa.
No. This simply appends rounds of conversation that get prepended to every interaction with that persona.
Depends on the model. Some models, like Gemma, treat the system prompt and the user messages with basically the same weight, while other models give greater weight to the system prompt.
The idea is that your conversations with this persona will be like
system: system prompt goes here
user: example 1
assistant: example 1
user: example 2
assistant: example 2
user: your actual message goes here
assistant:
The examples work great to help ground the style of the assistant's messages.
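If you're calling a chat API directly, the layout above can be sketched in Python. This is just a minimal illustration of assembling the message list; the function and variable names are hypothetical, and the exact message format depends on your API or library.

```python
# Hypothetical few-shot example rounds, prepended to every request.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "example 1"},
    {"role": "assistant", "content": "example 1"},
    {"role": "user", "content": "example 2"},
    {"role": "assistant", "content": "example 2"},
]

def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the full message list sent on each interaction:
    system prompt first, then the example rounds, then the actual user message."""
    return (
        [{"role": "system", "content": system_prompt}]
        + FEW_SHOT_EXAMPLES
        + [{"role": "user", "content": user_message}]
    )

messages = build_messages("system prompt goes here", "your actual message goes here")
```

The assistant's reply is then generated as the next message after this list, so the model sees the examples as if they were real prior turns.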