
Architecture Tour

This guide walks you through the Beever Atlas codebase structure, explaining the purpose of each module and how they work together.

Project Structure

docker-compose.yml

Core Modules

adapters/

Platform adapters provide a unified interface for fetching messages from different platforms.

__init__.py
base.py
mock.py
bridge.py
file_adapter.py

Key Types:

  • BaseAdapter: Abstract interface for all platforms
  • NormalizedMessage: Platform-agnostic message representation
  • ChannelInfo: Platform-agnostic channel metadata

Responsibilities:

  • Fetch message history from platforms
  • Normalize platform-specific data formats
  • Handle platform-specific quirks (pagination, rate limits)
  • Provide thread and channel metadata
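To make these types concrete, here is a minimal sketch of the adapter contract; the field names on NormalizedMessage are assumptions, not the actual schema:

```python
import abc
from dataclasses import dataclass, field

@dataclass
class NormalizedMessage:
    platform: str          # e.g. "slack", "discord" (illustrative)
    channel_id: str
    author: str
    text: str
    attachments: list[str] = field(default_factory=list)

class BaseAdapter(abc.ABC):
    @abc.abstractmethod
    async def fetch_history(self, channel_id: str) -> list[NormalizedMessage]:
        """Return the channel's messages in platform-agnostic form."""

# Minimal adapter used for illustration only.
class MockAdapter(BaseAdapter):
    async def fetch_history(self, channel_id: str) -> list[NormalizedMessage]:
        return [NormalizedMessage("mock", channel_id, "alice", "hello")]
```

Because every adapter returns the same NormalizedMessage shape, downstream services never branch on the source platform.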

agents/

Google ADK (Agent Development Kit) agents for specialized tasks.

__init__.py
file_extractor.py
message_extractor.py
query_router.py
wiki_builder.py
wiki_compiler.py

Responsibilities:

  • Extract structured data from unstructured content
  • Route queries to appropriate retrieval strategies
  • Generate wiki documentation from conversations
  • Handle citation and follow-up generation
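As a rough illustration of the routing decision, a sketch follows; the real query_router agent presumably uses an LLM, so this keyword heuristic and the strategy names are assumptions only:

```python
# Illustrative only: shows the shape of a routing decision,
# not the actual agent logic.
def route_query(question: str) -> str:
    q = question.lower()
    # Relationship-style questions suit graph traversal;
    # everything else falls back to semantic (vector) search.
    if any(word in q for word in ("who", "when", "related", "between")):
        return "graph"
    return "vector"
```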

api/

FastAPI route handlers organized by feature.

__init__.py
mcp.py
channels.py
connections.py
sync.py
ask.py
search.py

Responsibilities:

  • Expose HTTP endpoints for all features
  • Validate request data with Pydantic
  • Handle streaming responses (SSE)
  • Implement authentication and authorization
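A sketch of the handler shape, with a stubbed body; the request fields and response shape are assumptions, and the FastAPI route registration is shown only in a comment:

```python
from pydantic import BaseModel

class AskRequest(BaseModel):
    channel_id: str
    question: str

async def ask(req: AskRequest) -> dict:
    # A real handler would be registered with FastAPI
    # (e.g. @router.post("/ask")) and delegate to the service
    # layer; stubbed here to show the validated request shape.
    return {"answer": f"echo: {req.question}"}
```

Pydantic rejects malformed payloads before the handler body runs, so route code can assume well-typed input.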

infra/

Infrastructure services that support the application.

__init__.py
config.py
health.py
logging.py
crypto.py

Responsibilities:

  • Load and validate environment configuration
  • Provide health check for all dependencies
  • Configure structured logging
  • Encrypt sensitive credentials
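The health-check responsibility can be sketched like this; the report shape and the idea of passing checks as callables are assumptions:

```python
import asyncio
from collections.abc import Awaitable, Callable

async def health_report(
    checks: dict[str, Callable[[], Awaitable[bool]]],
) -> dict[str, str]:
    # Run all dependency checks concurrently; a raised exception
    # or a non-True result marks that dependency as down.
    results = await asyncio.gather(
        *(check() for check in checks.values()), return_exceptions=True
    )
    return {
        name: "ok" if result is True else "down"
        for name, result in zip(checks, results)
    }
```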

llm/

LLM provider abstraction for multiple AI services.

__init__.py
provider.py
litellm_provider.py
model_resolver.py
schemas.py

Supported Providers:

  • Anthropic Claude
  • OpenAI GPT models
  • Google Gemini
  • Any provider supported by LiteLLM

Responsibilities:

  • Provide unified interface for LLM calls
  • Handle model name mapping
  • Implement retry logic and error handling
  • Support streaming responses
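A minimal sketch of the unified interface plus retry logic; the class name, method signature, and backoff values are assumptions, not the module's actual API:

```python
import abc
import asyncio

class LLMProvider(abc.ABC):
    @abc.abstractmethod
    async def complete(self, prompt: str, model: str) -> str: ...

async def complete_with_retry(provider: LLMProvider, prompt: str,
                              model: str, retries: int = 3) -> str:
    # Exponential backoff; the base delay is shortened for illustration.
    delay = 0.1
    for attempt in range(retries):
        try:
            return await provider.complete(prompt, model)
        except Exception:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(delay)
            delay *= 2
```

Callers depend only on LLMProvider, so swapping Anthropic for OpenAI (or anything LiteLLM supports) is a configuration change.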

models/

Pydantic models for type safety and validation.

__init__.py
platform_connection.py
channel.py
message.py
sync_job.py
llm_request.py

Responsibilities:

  • Define data schemas for the entire application
  • Provide type hints and validation
  • Serialize/deserialize for storage
  • Generate OpenAPI documentation

retrieval/

Hybrid semantic + graph search implementation.

__init__.py
hybrid_search.py
vector_search.py
graph_search.py
query_optimizer.py

Responsibilities:

  • Implement hybrid search combining vector and graph
  • Execute vector similarity queries
  • Execute graph traversal queries
  • Rank and merge results
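One common way to rank and merge the two result lists is reciprocal rank fusion; whether Atlas uses RRF specifically is an assumption here:

```python
# Sketch of merging vector and graph results with reciprocal rank
# fusion (RRF). Each document's score is the sum of 1/(k + rank)
# across the lists it appears in, so items ranked well by both
# searches float to the top.
def rrf_merge(vector_ids: list[str], graph_ids: list[str],
              k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranked in (vector_ids, graph_ids):
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```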

services/

Business logic layer that coordinates between modules.

__init__.py
chat_history.py
file_processor.py
sync_runner.py
query_service.py
citation_service.py
platform_store.py

Responsibilities:

  • Implement core business logic
  • Coordinate between adapters and stores
  • Handle complex multi-step operations
  • Provide transactional boundaries

stores/

Data store clients for persistence layers.

__init__.py
weaviate.py
neo4j.py
mongodb.py
redis.py
platform.py

Responsibilities:

  • Provide low-level database operations
  • Handle connection pooling
  • Implement caching strategies
  • Manage database migrations

wiki/

Wiki generation engine for creating documentation.

__init__.py
builder.py
compiler.py
cache.py
templates.py

Responsibilities:

  • Generate structured wiki content from conversations
  • Compile multi-language wikis
  • Cache generated content
  • Support multiple output formats

Bot Service

The bot/ directory contains the TypeScript bot service.

index.ts
chat-manager.ts
formatter.ts
sse-client.ts
webhook-buffer.ts
bridge.ts
slack-mrkdwn.ts
package.json
tsconfig.json

Responsibilities:

  • Handle platform webhooks (Slack, Discord, Teams)
  • Format responses for each platform
  • Manage Chat SDK connections
  • Provide bridge API for Python backend

Data Flow

Message Sync Flow

  1. Platform Adapter
  2. Normalized Message
  3. File Processor (if attachments)
  4. Vector Store (Weaviate) + Graph Store (Neo4j)
  5. Document Store (MongoDB) + Cache (Redis)
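The sync steps above can be sketched end-to-end; the store and adapter attribute names are placeholders, not the real interfaces, and attachment processing is omitted:

```python
# Sketch of the sync pipeline; attribute names are placeholders.
async def sync_channel(adapter, stores, channel_id: str) -> int:
    messages = await adapter.fetch_history(channel_id)  # Platform Adapter
    for msg in messages:                                # NormalizedMessage
        # (file processing of attachments omitted)
        await stores.vector.index(msg)     # Vector Store (Weaviate)
        await stores.graph.link(msg)       # Graph Store (Neo4j)
        await stores.documents.save(msg)   # Document Store (MongoDB)
    return len(messages)
```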

Query Flow

  1. API Request
  2. Query Service
  3. Retrieval (Hybrid Search)
  4. Vector Search + Graph Search
  5. LLM Provider
  6. Response + Citations
  7. API Response

Key Patterns

Adapter Pattern

Platform adapters implement BaseAdapter for a consistent interface:

import abc

class BaseAdapter(abc.ABC):
    @abc.abstractmethod
    async def fetch_history(self, channel_id: str) -> list[NormalizedMessage]:
        pass

Service Layer

Business logic isolated in services, not route handlers:

# In route handler
result = await query_service.ask(channel_id, question)

# In service
async def ask(self, channel_id: str, question: str):
    # Complex business logic
    pass

Repository Pattern

Data access abstracted through store clients:

# Use store interface
messages = await stores.mongodb.get_messages(channel_id)

# Implementation details hidden
class MongoDBStore:
    async def get_messages(self, channel_id: str):
        # MongoDB-specific code
        pass

Dependency Injection

Stores and services use dependency injection:

# In conftest.py
@pytest.fixture
def mock_stores():
    fake = MagicMock()
    original = stores._stores
    stores._stores = fake      # swap the real stores for a fake
    yield fake
    stores._stores = original  # restore after the test

Configuration

Configuration loaded from environment with validation:

# in infra/config.py
from pydantic import BaseSettings  # pydantic v1; pydantic_settings in v2

class Settings(BaseSettings):
    database_url: str
    api_key: str | None = None

    class Config:
        env_file = ".env"

Testing Strategy

conftest.py
test_adapters.py
test_services.py

Testing Principles:

  • Mock external services (Slack, Discord, LLMs)
  • Use MockAdapter for adapter tests
  • Test stores with test databases
  • Integration tests use real infrastructure
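For example, an adapter test built on a mock might look like this; the seeding API shown for MockAdapter is an assumption, not the actual fixture:

```python
import asyncio

# Hypothetical MockAdapter with a simple seeding API (an assumption).
class MockAdapter:
    def __init__(self, seeded: dict[str, list[str]]):
        self._seeded = seeded

    async def fetch_history(self, channel_id: str) -> list[str]:
        return self._seeded.get(channel_id, [])

def test_fetch_history_returns_seeded_messages():
    adapter = MockAdapter({"C1": ["hello"]})
    assert asyncio.run(adapter.fetch_history("C1")) == ["hello"]
    assert asyncio.run(adapter.fetch_history("missing")) == []
```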

Next Steps

Now that you understand the architecture, you can dive into any module with confidence.

Ready to contribute? Check the Issues for open tasks.
