# CLI
Let's explore how to run readmeai with various configurations and custom options. We'll start with basic usage and then move on to more advanced options.
## Basic Usage
The general syntax for using readme-ai is:
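```sh
readmeai --repository <REPO_URL_OR_PATH> --api <LLM_SERVICE>
```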
Replace `<REPO_URL_OR_PATH>` with your repository URL or local path, and `<LLM_SERVICE>` with your chosen LLM service (`openai`, `ollama`, `gemini`, or `offline`).
## Examples with Different LLM Providers
### Using OpenAI
```sh
readmeai --repository https://github.com/eli64s/readme-ai \
    --api openai \
    --model gpt-3.5-turbo
```

The model currently defaults to `gpt-3.5-turbo`, so the `--model` flag is optional here.
### Using Ollama
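To use a locally running model, point `--api` at `ollama`. A minimal sketch, assuming the Ollama service is running and serving a model (the `llama3` model name here is illustrative):

```sh
readmeai --repository https://github.com/eli64s/readme-ai \
    --api ollama \
    --model llama3
```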
### Using Google Gemini
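Gemini follows the same pattern with `--api gemini`. A minimal sketch, assuming your Google API credentials are configured (the `gemini-1.5-flash` model name here is illustrative):

```sh
readmeai --repository https://github.com/eli64s/readme-ai \
    --api gemini \
    --model gemini-1.5-flash
```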
### Offline Mode
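Offline mode generates a README without calling any LLM service, so no API key is required:

```sh
readmeai --repository https://github.com/eli64s/readme-ai \
    --api offline
```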
## Advanced Usage
You can customize the output using various options:
```sh
readmeai --repository https://github.com/eli64s/readme-ai \
    --output readmeai.md \
    --api openai \
    --model gpt-4-turbo \
    --badge-color A931EC \
    --badge-style flat-square \
    --header-style compact \
    --toc-style fold \
    --temperature 0.1 \
    --tree-depth 2 \
    --image LLM \
    --emojis
```
For a full list of options, run:
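```sh
readmeai --help
```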
See the Configuration guide for detailed examples and explanations of each option.
## Tips for Effective Usage
- Choose the right LLM: Different LLMs may produce varying results. Experiment to find the best fit for your project.
- Adjust temperature: Lower values (e.g., 0.1) produce more focused output, while higher values (e.g., 0.8) increase creativity (see the example after this list).
- Use custom prompts: For specialized projects, consider using custom prompts to guide the AI's output.
- Review and edit: Always review the generated README and make necessary adjustments to ensure accuracy and relevance.
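For instance, reusing the flags from the examples above, a run tuned for more creative output might look like:

```sh
readmeai --repository https://github.com/eli64s/readme-ai \
    --api openai \
    --temperature 0.8
```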
## Troubleshooting
If you encounter any issues, work through the checks below:
- Ensure you have the latest version of readme-ai installed.
- Check your API credentials if using OpenAI or Google Gemini.
- For Ollama, make sure the Ollama service is running locally.
- Consult the FAQ or open an issue for additional support.
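A quick sanity check covering the first three items, assuming readme-ai was installed from PyPI and your OpenAI key is exported in the `OPENAI_API_KEY` environment variable (an assumption; adjust for your provider):

```sh
pip install --upgrade readmeai      # ensure the latest release is installed
echo "${OPENAI_API_KEY:?not set}"   # fail fast if the API key is missing
ollama serve                        # start the local Ollama service, if using Ollama
```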