Ask
What is Ask
Ask is OctoReport's intelligent Q&A feature, allowing you to interact with the system in natural language to quickly explore knowledge base content or have general conversations.
Core Features:
- Two Modes: Library Mode + General Mode
- Streaming Responses: Real-time feedback, character-by-character display, no waiting
- Reasoning Process: Supports Chain-of-Thought display
- Keyboard Shortcuts: Cmd/Ctrl+K to quickly start new conversation
[Screenshot placeholder: Ask interface, showing the two modes and the Ask UI]
Library Mode
What is Library Mode
Library Mode is based on RAG (Retrieval-Augmented Generation) technology, allowing AI to answer questions based on your knowledge base content, ensuring accurate and verifiable answers.
How it Works:
Your question → Keyword extraction (LLM) → Retrieve relevant content (Top 10) → Build context → Generate answer (with citations) → Stream output
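The pipeline above can be sketched roughly as follows. All function names and the scoring logic here are hypothetical illustrations, not OctoReport's actual API; in particular, the real keyword extraction uses an LLM, which a naive stopword filter stands in for.

```python
# Rough sketch of the Library Mode RAG pipeline. All names here are
# hypothetical illustrations, not OctoReport's actual API.

def extract_keywords(question: str) -> list[str]:
    # OctoReport uses an LLM for this step; a naive stopword filter
    # stands in for it here.
    stop = {"what", "is", "the", "has", "there", "been", "recently", "a"}
    words = [w.strip("?.,").lower() for w in question.split()]
    return [w for w in words if w and w not in stop][:5]

def retrieve(library: list[dict], keywords: list[str], top_k: int = 10) -> list[dict]:
    # Title matches outrank full-text matches; only the Top 10 are kept.
    def score(item: dict) -> int:
        return sum(
            2 if k in item["title"].lower() else 1 if k in item["content"].lower() else 0
            for k in keywords
        )
    hits = [it for it in library if score(it) > 0]
    return sorted(hits, key=score, reverse=True)[:top_k]

def build_context(items: list[dict]) -> str:
    # Concatenate retrieved items so the LLM can cite title + URL.
    return "\n\n".join(f"[{it['title']}]({it['url']})\n{it['content']}" for it in items)

library = [
    {"title": "OpenAI releases new model", "url": "https://example.com/a",
     "content": "Details about the release."},
    {"title": "Gardening tips", "url": "https://example.com/b",
     "content": "Soil, water, sunlight."},
]

keywords = extract_keywords("What OpenAI model news has there been recently?")
context = build_context(retrieve(library, keywords))
# `context` would then be sent to the LLM to generate a cited, streamed answer.
print(context.splitlines()[0])
```

The irrelevant "Gardening tips" item scores zero and is filtered out, which is why an empty or off-topic library yields no usable answer.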
How to Use
Step 1: Select Library
- Click "Library Mode" at the top
- Select target library from dropdown (e.g., "AI Industry News")
- The library must contain content before it can be used
Step 2: Ask Questions
Enter your question, for example:
- "What important AI news has there been recently?"
- "What were the tech highlights this past week?"
- "Summarize OpenAI's latest developments"
Step 3: View Answer
AI will:
- Retrieve relevant content from library (up to 10 items)
- Generate answer based on content
- Cite original sources (title + URL)
- Stream output character by character (real-time feedback)
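Streamed answers arrive incrementally, so the UI can render each chunk as it comes in. A minimal simulation (the generator below stands in for the real token stream):

```python
# Minimal simulation of consuming a streamed answer; the generator
# stands in for OctoReport's real token stream.
from typing import Iterator

def fake_stream(answer: str) -> Iterator[str]:
    # Yield the answer one character at a time, as a streaming API would.
    for ch in answer:
        yield ch

rendered = ""
for chunk in fake_stream("AI news summary..."):
    rendered += chunk  # a UI would append each chunk as it arrives
print(rendered)
```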
Tip: Library Mode answers are based on collected content. If the library is empty or its content is irrelevant, the AI cannot provide a useful answer.
Retrieval Mechanism
Keyword Extraction:
- AI automatically extracts 3-5 keywords from your question
- Example: "Recent AI large model progress" → ["AI", "large model", "progress"]
Content Matching:
- Search library for content containing keywords
- Priority: Title match > Full-text match
- Return most relevant Top 10 items
Context Building:
The retrieved items are serialized into a JSON array before being sent to the model (the structure and field names below are illustrative):

```json
[
  { "title": "Article title", "url": "https://example.com/article", "summary": "Short excerpt used as context" },
  { "title": "...", "url": "...", "summary": "..." }
]
```
Generate Answer:
- LLM generates answer based on context
- Auto-cite sources (showing titles and links)
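The citation behavior comes from how the answer prompt is assembled. A sketch of such a prompt (the wording is illustrative; OctoReport's actual prompt is not shown on this page):

```python
# Illustrative prompt assembly for cited answers; the exact wording
# OctoReport uses is not documented here.
def build_prompt(question: str, context: str) -> str:
    return (
        "Answer the question using only the context below.\n"
        "Cite each source you use as [title](url).\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What did OpenAI release?",
    "[OpenAI releases new model](https://example.com/a)\nDetails about the release.",
)
print(prompt.splitlines()[0])
```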
Best Practices
✅ Specific questions get better answers
Good examples:
- "What major AI models were released this week?"
- "What are the key features of GPT-4?"
- "Which companies are hiring AI engineers?"
Poor examples:
- "What's new?" (Too vague)
- "Tell me everything" (Too broad)
✅ Ensure library has relevant content
- Library Mode depends on collected content
- If empty or irrelevant, consider:
- Adding more data sources
- Adjusting collection frequency
- Switching to General Mode
✅ Use follow-up questions
Ask supports context memory:
- First question: "What important AI news is there today?"
- Follow-up: "Which company is most noteworthy?" (AI understands you're asking about the news from previous answer)
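Context memory typically works by resending the prior turns with each request, so the model sees the follow-up in context. A minimal sketch (the message shape is illustrative, not OctoReport's wire format):

```python
# Minimal sketch of conversation memory: each request carries the prior
# turns, so "Which company is most noteworthy?" is understood in context.
# The message shape is illustrative, not OctoReport's wire format.
history: list[dict] = []

def ask(question: str) -> list[dict]:
    history.append({"role": "user", "content": question})
    messages = list(history)  # the full history is sent to the LLM
    # ...the LLM's reply would be appended as {"role": "assistant", ...}
    return messages

first = ask("What important AI news is there today?")
followup = ask("Which company is most noteworthy?")
print(len(followup))
```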
General Mode
What is General Mode
General Mode is a general-purpose conversation mode, not limited to knowledge base content, suitable for:
- General questions (e.g., "How to write Python code?")
- Brainstorming (e.g., "How to design a report template?")
- Explanations (e.g., "What is RAG?")
Difference from Library Mode:
- Library Mode: Answer based on your knowledge base
- General Mode: Answer based on LLM's built-in knowledge
How to Use
- Click "General Mode" at the top
- Enter any question
- AI will answer based on built-in knowledge
Use Cases:
- Technical consultation: "How to optimize LLM prompt?"
- Knowledge explanation: "What is the difference between GPT-4 and Claude?"
- Creative brainstorming: "Design a competitive intelligence report structure"
Advanced Features
Web Search (General Mode)
When enabling Web Search:
- AI will search web for latest info
- Cite search result sources
- Cost: 10 credits/search
How to Enable:
- Switch to General Mode
- Toggle "Enable Web Search" switch
- Ask question (e.g., "What did OpenAI announce today?")
When to Use:
- Need latest information (recent days/today)
- Knowledge base has no relevant content
- Need to verify facts
Chain-of-Thought Display
Some models (such as o1 and DeepSeek) can display their reasoning process:
- Show AI's thinking steps
- Help understand answer logic
- Useful for learning and debugging
Cost & Performance
Library Mode Cost
Per Conversation:
- Keyword extraction: ~1 credit
- Content retrieval: Free (database query)
- Answer generation: 5-20 credits (depends on model)
Total: ~5-20 credits/conversation
General Mode Cost
Per Conversation:
- Answer generation: 5-50 credits (depends on model and length)
With Web Search: +10 credits
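The figures above can be combined into a rough per-conversation estimate. This helper is a sketch using the ranges on this page; actual costs depend on model and answer length:

```python
# Rough per-conversation credit estimate using the ranges on this page.
def estimate_credits(mode: str, web_search: bool = False) -> tuple[int, int]:
    if mode == "library":
        low, high = 1 + 5, 1 + 20  # keyword extraction + answer generation
    else:  # general mode
        low, high = 5, 50
    if web_search:
        low, high = low + 10, high + 10
    return low, high

print(estimate_credits("library"))        # (6, 21)
print(estimate_credits("general", True))  # (15, 60)
```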
Performance Optimization
Reduce Cost:
- Library Mode: Use fast models (e.g., GPT-4o-mini)
- Limit conversation length (shorter prompts)
- Disable Web Search when not needed
Improve Quality:
- Use premium models (e.g., GPT-4o, Claude Sonnet)
- Ensure library has enough relevant content
- Ask specific questions
Troubleshooting
Problem: Library Mode returns "No relevant content"
Possible Causes:
- Library is empty
- Keywords don't match content
- All content is marked as expired
Solution:
- Check library content count
- Try different phrasing
- Check data source collection status
Problem: Answer quality insufficient
Possible Causes:
- Retrieved content not relevant
- Question too vague
- Model choice inappropriate
Solution:
- Ask more specific questions
- Increase data source collection frequency
- Switch to premium model
Problem: Stream response interrupted
Possible Causes:
- Network unstable
- Backend service timeout
- Browser compatibility issue
Solution:
- Refresh and retry
- Check network connection
- Contact admin to check backend service status
Next Steps
- Report Generation - Auto-generate structured reports on schedule
- Trigger Inbox - Trigger reports via natural language email
- Credits & Logs - View consumption details and task logs