Utilize LLM to Review Documentation #328

Could we explore using an LLM to review and enhance the documentation?

Comments
@Hawazyn I'm not sure what you mean by this, but this is a research project, so it's open to any type of improvement: please feel free either to document here in more detail what you're proposing, or to jump directly to creating a PR so the community can comment on the result.
@baentsch Thank you for your response. While reviewing the documentation, I noticed several areas that could benefit from refinement, and I think leveraging an LLM could help us identify and address them. My main concern is resources and implementation: I'm not sure whether we should call a model from within GitHub Actions or run one locally. This is actually why I opened the issue: to discuss the best approach for integrating LLMs into our workflow. Any guidance or suggestions would be greatly appreciated.
Well, before adding something to a workflow (as in CI, you mean, right?) there needs to be an idea of how to do that in general (and why). I am not an expert in AI, but since you brought up the suggestion, I'd assume you have an idea of how to apply an LLM here, no? What problem do you want to solve, for example?
@baentsch I was thinking about an approach and found that leveraging an LLM (e.g., OpenAI, Claude) could improve our documentation. It can help raise quality through automated reviews and ensure clarity and consistency. The plan is straightforward: we write a Python script that calls the LLM API to analyze the documentation, then integrate that script into a GitHub Action so it runs during CI and generates reports highlighting areas for improvement. Let me know if this approach sounds good, and I can start drafting the solution once I complete my current draft PR.
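For illustration, a minimal sketch of such a script could look like the following. It assumes the OpenAI Python client (v1+) with an `OPENAI_API_KEY` available in the environment; the model name, prompt wording, `docs/` directory, and report filename are placeholders, not project decisions.

```python
# Illustrative sketch only: scan Markdown docs and ask an LLM for a review.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY environment
# variable; the model name, prompt, docs/ path, and report name are placeholders.
import pathlib
import sys

from openai import OpenAI

REVIEW_PROMPT = (
    "You are reviewing project documentation. Report, under separate headings, "
    "grammar mistakes, unclear or ambiguous wording, and inconsistencies. "
    "Quote the exact text for every finding and suggest a fix."
)


def review_file(client: OpenAI, path: pathlib.Path) -> str:
    """Send one documentation file to the model and return its review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": path.read_text(encoding="utf-8")},
        ],
    )
    return response.choices[0].message.content or ""


def main() -> int:
    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    sections = []
    for path in sorted(pathlib.Path("docs").rglob("*.md")):
        sections.append(f"## {path}\n\n{review_file(client, path)}")
    report = "\n\n".join(sections)
    pathlib.Path("llm-doc-review.md").write_text(report, encoding="utf-8")
    print(report)  # also echo the report to the CI log
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In CI, such a script could run as a step of a GitHub Actions job and upload `llm-doc-review.md` as a build artifact, so the report is attached to the run rather than committed to the repository.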
I'll admit that I am somewhat of a skeptic when it comes to applying LLMs to content of a technical nature. What specifically would you hope to get from such a report, @Hawazyn?
@SWilson4 We could define a structured output that organizes issues under specific sections: for example, a section for grammar mistakes listing each instance, another for unclear language quoting the specific text, one for punctuation errors, and so on. This makes it easy to identify and address issues systematically. What do you think?
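As one hypothetical shape for that structured output (section names and fields are examples only, not an agreed format):

```python
# Hypothetical report layout; section names and fields are illustrative only.
# Uses built-in generics (list[...]), so Python 3.9+ is assumed.
from dataclasses import dataclass, field


@dataclass
class Finding:
    file: str        # documentation file the finding refers to
    quote: str       # exact text being flagged
    suggestion: str  # proposed fix or rewording


@dataclass
class ReviewReport:
    grammar_mistakes: list[Finding] = field(default_factory=list)
    unclear_language: list[Finding] = field(default_factory=list)
    punctuation_errors: list[Finding] = field(default_factory=list)
```

Each finding would then map directly to a concrete fix, which keeps the report actionable.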
I understand the team's focus isn't on AI, but given the shortage of contributors, I'd like to lighten the load and take some work off the team's shoulders. Don't worry, I already have an AI team ready to handle this independently, and I believe improving documentation quality will ultimately support the team's goals.
Thanks! I shall be curious as to what the results will be.