import openai
import requests
from bs4 import BeautifulSoup
openai.api_key = "KEY"
url = "https://meta.discourse.org/t/summarising-topics-with-an-llm-gpt-bert/254951.json"
response = requests.get(url)
data = response.json()
messages = []
messages.append("Title: " + data["title"])
for post in data['post_stream']['posts']:
    soup = BeautifulSoup(post['cooked'], 'html.parser')
    messages.append("Post #" + str(post["post_number"]) + " by " + post["username"])
    messages.append(soup.get_text())
text_blob = "\n".join(messages)
print(text_blob)
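One caveat worth noting: the topic `.json` endpoint typically returns only the first batch of posts in `data['post_stream']['posts']`, while the full ordered list of post ids lives in `data['post_stream']['stream']`. The remaining posts can be fetched in batches via `/t/{topic_id}/posts.json?post_ids[]=…`. A hypothetical batching helper (not part of the original snippet) might look like:

```python
def batch_post_ids(stream_ids, loaded, size=20):
    """Return batches of post ids that still need fetching.

    stream_ids: full id list from data['post_stream']['stream'].
    loaded: set of ids already present in data['post_stream']['posts'].
    size: batch size per request (the API caps how many post_ids[]
          a single request may carry; 20 is an assumed safe value).
    """
    remaining = [i for i in stream_ids if i not in loaded]
    return [remaining[i:i + size] for i in range(0, len(remaining), size)]
```

Each batch would then be passed as `post_ids[]` parameters to the posts endpoint and the results appended to `messages` the same way as above.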
max_chunk_len = 4000  # Maximum length of each chunk
chunks = [text_blob[i:i+max_chunk_len] for i in range(0, len(text_blob), max_chunk_len)]
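The fixed-width slicing above can cut a post mid-line (or mid-word), which slightly degrades summary quality at chunk boundaries. A newline-aware variant is sketched below; `chunk_text` is a hypothetical helper, not part of the original, and assumes no single line exceeds `max_len`:

```python
def chunk_text(text, max_len=4000):
    """Split text into chunks of at most max_len characters,
    breaking only at newline boundaries so posts stay intact."""
    chunks, current = [], ""
    for line in text.split("\n"):
        # +1 accounts for the newline re-added when lines are joined
        if current and len(current) + 1 + len(line) > max_len:
            chunks.append(current)
            current = line
        else:
            current = current + "\n" + line if current else line
    if current:
        chunks.append(current)
    return chunks
```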
summaries = []
for chunk in chunks:
    print("processing chunk")
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt="prompt:\n" + chunk + "\nsummary:",
        temperature=0.7,
        max_tokens=200,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    summary = response.choices[0].text.strip()
    summaries.append(summary)
    print("\nSUMMARY")
    print(summary)
Discourse is exploring the possibility of using an LLM such as GPT-3 or BERT to summarize long threads on demand. This feature would be configured with an openai key and could be especially useful for private communities. The summarization would be inserted in the thread, with a link at the top and a feedback mechanism. The AI team is currently exploring this possibility and will update when they have something to show.
It’s interesting, but the results are fairly uneven.