OctoReport Docs

Ask

What is Ask

Ask is OctoReport's intelligent Q&A feature, allowing you to interact with the system in natural language to quickly explore knowledge base content or have general conversations.

Core Features:

  • Two Modes: Library Mode + General Mode
  • Streaming Responses: Real-time feedback with character-by-character display, no waiting
  • Reasoning Process: Supports Chain-of-Thought display
  • Keyboard Shortcuts: Cmd/Ctrl+K to quickly start a new conversation

[Screenshot placeholder: Ask interface, showing the two modes and the Ask UI]


Library Mode

What is Library Mode

Library Mode is based on RAG (Retrieval-Augmented Generation) technology, allowing AI to answer questions based on your knowledge base content, ensuring accurate and verifiable answers.

How it Works:

Your question
  ↓
Keyword extraction (LLM)
  ↓
Retrieve relevant content (Top 10)
  ↓
Build context
  ↓
Generate answer (with citations)
  ↓
Stream output
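The flow above is a standard RAG loop. Here is a minimal Python sketch of it; the class and function names (`StubLLM`, `StubLibrary`, `ask_library`) are illustrative assumptions, not OctoReport's actual API:

```python
class StubLLM:
    """Stand-in for the real LLM; illustrates the two calls the flow needs."""
    def extract_keywords(self, question):
        # The real system asks an LLM to pick keywords; here we just split words.
        return [w.lower().strip("?.,") for w in question.split()][:5]

    def generate_stream(self, question, context):
        # The real system streams model tokens; here we stream context lines.
        for line in context.splitlines():
            yield line

class StubLibrary:
    """Stand-in knowledge base with a naive keyword search."""
    def __init__(self, items):
        self.items = items

    def search(self, keywords):
        return [it for it in self.items
                if any(k in it["content"].lower() for k in keywords)]

def ask_library(question, library, llm, top_k=10):
    """Keyword extraction -> retrieval -> context building -> streamed answer."""
    keywords = llm.extract_keywords(question)          # Keyword extraction (LLM)
    items = library.search(keywords)[:top_k]           # Retrieve Top 10
    context = "\n".join(                               # Build context
        f"{it['title']} ({it['url']}): {it['content']}" for it in items
    )
    yield from llm.generate_stream(question, context)  # Stream output
```

Because the answer is generated only from `context`, an empty or irrelevant library yields an empty answer, which is why Library Mode depends on collected content.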

How to Use

Step 1: Select Library

  1. Click "Library Mode" at the top
  2. Select the target library from the dropdown (e.g., "AI Industry News")
  3. The library must contain content before it can be queried

Step 2: Ask Questions

Enter your question, for example:

  • "What important AI news has there been recently?"
  • "What were the tech highlights this past week?"
  • "Summarize OpenAI's latest developments"

Step 3: View Answer

AI will:

  1. Retrieve relevant content from the library (up to 10 items)
  2. Generate an answer based on that content
  3. Cite the original sources (title + URL)
  4. Stream the output character by character (real-time feedback)

Tip: Library Mode answers are based on collected content. If the library is empty or its content is irrelevant, the AI cannot provide useful answers.

Retrieval Mechanism

Keyword Extraction:

  • AI automatically extracts 3-5 keywords from your question
  • Example: "Recent AI large model progress" →
    ["AI", "large model", "progress"]
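The extraction step can be framed as an LLM prompt that returns a JSON array of keywords, plus a parser for the reply. This is a hypothetical sketch; the prompt wording and parsing used by the product are assumptions:

```python
import json

def build_extraction_prompt(question, max_keywords=5):
    # Hypothetical prompt; the product's actual wording is not shown in the docs.
    return (f"Extract 3-{max_keywords} search keywords from this question. "
            f"Reply with only a JSON array of strings.\nQuestion: {question}")

def parse_keywords(llm_reply):
    """Parse the model's JSON-array reply, tolerating surrounding text."""
    start, end = llm_reply.find("["), llm_reply.rfind("]")
    if start == -1 or end == -1:
        return []
    return json.loads(llm_reply[start:end + 1])
```

For example, a reply like `Sure: ["AI", "large model", "progress"]` parses to the three keywords shown above.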

Content Matching:

  • Search the library for content containing the keywords
  • Priority: title match > full-text match
  • Return the 10 most relevant items
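The priority rule (title matches outrank full-text matches) could be implemented with a simple scoring function like the sketch below; the actual ranking logic may differ:

```python
def rank_items(items, keywords, top_k=10):
    """Score each item: a title hit counts double, a full-text hit counts once."""
    def score(item):
        title = item["title"].lower()
        text = item["content"].lower()
        return sum(2 if k in title else (1 if k in text else 0) for k in keywords)

    scored = [(score(it), it) for it in items]
    # Keep only items that matched at least one keyword, best score first.
    scored = [pair for pair in scored if pair[0] > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [it for _, it in scored[:top_k]]
```

Non-matching items are dropped entirely, so an off-topic library produces an empty result rather than noise.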

Context Building:

The retrieved items are assembled into a structured context for the LLM, along these lines (field names are illustrative):

{
  "question": "Recent AI large model progress",
  "sources": [
    {
      "title": "Article title",
      "url": "https://example.com/article",
      "content": "Excerpt used as answering context..."
    }
  ]
}

Generate Answer:

  • LLM generates answer based on context
  • Auto-cite sources (showing titles and links)
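The "title + URL" citation format could be rendered with a small helper like this (an illustrative sketch, not the product's actual code):

```python
def format_citations(items):
    """Render cited sources as numbered 'title (URL)' lines."""
    return "\n".join(f"[{i}] {it['title']} ({it['url']})"
                     for i, it in enumerate(items, start=1))
```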

Best Practices

✅ Specific questions get better answers

Good examples:

  • "What major AI models were released this week?"
  • "What are the key features of GPT-4?"
  • "Which companies are hiring AI engineers?"

Poor examples:

  • "What's new?" (Too vague)
  • "Tell me everything" (Too broad)

✅ Ensure library has relevant content

  • Library Mode depends on collected content
  • If empty or irrelevant, consider:
    1. Adding more data sources
    2. Adjusting collection frequency
    3. Switching to General Mode

✅ Use follow-up questions

Ask supports context memory:

  • First question: "What important AI news is there today?"
  • Follow-up: "Which company is most noteworthy?" (AI understands you're asking about the news from previous answer)
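Context memory of this kind is typically implemented by resending the prior turns with each request, which is how the model can resolve references like "which company?". A minimal sketch, not OctoReport's actual implementation:

```python
class Conversation:
    """Keeps the turn history so follow-up questions resolve references."""
    def __init__(self):
        self.messages = []

    def ask(self, question, llm):
        self.messages.append({"role": "user", "content": question})
        # The full history is sent with every call, so the model sees
        # both the earlier question and its own earlier answer.
        reply = llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

Note that longer histories mean larger prompts, which is one reason shorter conversations cost fewer credits.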

General Mode

What is General Mode

General Mode is a general-purpose conversation mode, not limited to knowledge base content. It is suitable for:

  • General questions (e.g., "How to write Python code?")
  • Brainstorming (e.g., "How to design a report template?")
  • Explanations (e.g., "What is RAG?")

Difference from Library Mode:

  • Library Mode: Answer based on your knowledge base
  • General Mode: Answer based on LLM's built-in knowledge

How to Use

  1. Click "General Mode" at the top
  2. Enter any question
  3. AI will answer based on built-in knowledge

Use Cases:

  • Technical consultation: "How to optimize LLM prompt?"
  • Knowledge explanation: "What is the difference between GPT-4 and Claude?"
  • Creative brainstorming: "Design a competitive intelligence report structure"

Advanced Features

Web Search (General Mode)

When enabling Web Search:

  • The AI searches the web for the latest information
  • Search result sources are cited
  • Cost: 10 credits per search

How to Enable:

  1. Switch to General Mode
  2. Toggle "Enable Web Search" switch
  3. Ask question (e.g., "What did OpenAI announce today?")

When to Use:

  • Need latest information (recent days/today)
  • Knowledge base has no relevant content
  • Need to verify facts

Chain-of-Thought Display

Some models (such as o1 and DeepSeek) can display their reasoning process:

  • Show AI's thinking steps
  • Help understand answer logic
  • Useful for learning and debugging

Cost & Performance

Library Mode Cost

Per Conversation:

  • Keyword extraction: ~1 credit
  • Content retrieval: Free (database query)
  • Answer generation: 5-20 credits (depends on model)

Total: ~5-20 credits/conversation

General Mode Cost

Per Conversation:

  • Answer generation: 5-50 credits (depends on model and length)

With Web Search: +10 credits
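Putting the numbers above together, a per-conversation cost can be estimated roughly as follows (the credit figures come from this page; exact pricing may change):

```python
def estimate_credits(mode, generation_credits, web_search=False):
    """Rough credit estimate for one conversation, per the ranges on this page."""
    credits = generation_credits          # 5-20 (Library) or 5-50 (General)
    if mode == "library":
        credits += 1                      # keyword extraction (~1 credit);
                                          # content retrieval itself is free
    if web_search:
        credits += 10                     # Web Search surcharge (General Mode)
    return credits
```

For example, a Library Mode conversation with a 10-credit generation step costs about 11 credits, while a General Mode conversation with Web Search and a 20-credit answer costs about 30.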

Performance Optimization

Reduce Cost:

  • Library Mode: Use fast models (e.g., GPT-4o-mini)
  • Limit conversation length (shorter prompts)
  • Disable Web Search when not needed

Improve Quality:

  • Use premium models (e.g., GPT-4o, Claude Sonnet)
  • Ensure library has enough relevant content
  • Ask specific questions

Troubleshooting

Problem: Library Mode returns "No relevant content"

Possible Causes:

  1. Library is empty
  2. Keywords don't match content
  3. All content is marked as expired

Solution:

  • Check library content count
  • Try different phrasing
  • Check data source collection status

Problem: Answer quality insufficient

Possible Causes:

  1. Retrieved content not relevant
  2. Question too vague
  3. Model choice inappropriate

Solution:

  • Ask more specific questions
  • Increase data source collection frequency
  • Switch to premium model

Problem: Streaming response interrupted

Possible Causes:

  1. Network unstable
  2. Backend service timeout
  3. Browser compatibility issue

Solution:

  • Refresh and retry
  • Check network connection
  • Contact admin to check backend service status

Next Steps

  • Report Generation - Auto-generate structured reports on schedule
  • Trigger Inbox - Trigger reports via natural language email
  • Credits & Logs - View consumption details and task logs