Advanced Usage

Streaming Responses

Enable real-time streaming for both text generation and voice synthesis:

# Streaming text generation
from secret_ai_sdk.secret_ai import ChatSecret

# secret_ai_llm is a previously configured ChatSecret instance
messages = [("human", "Write a story about AI")]

# stream() yields response chunks as they arrive; each chunk exposes .content
for chunk in secret_ai_llm.stream(messages):
    print(chunk.content, end="", flush=True)

Custom Streaming Handler

Implement custom streaming behavior with callback handlers:

from langchain.callbacks.base import BaseCallbackHandler

class SecretStreamingHandler(BaseCallbackHandler):
    def __init__(self, width: int = 60):
        self.width = width
        self.buffer = ""
        self.in_think = False
    
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        """Handle new token from LLM stream"""
        if "<think>" in token:
            self.in_think = True
            print("\n🧠 ", end="", flush=True)
        elif "</think>" in token:
            self.in_think = False
            print("\n", flush=True)
        else:
            if self.in_think:
                # Color thinking text in cyan
                print(f"\033[96m{token}\033[0m", end="", flush=True)
            else:
                print(token, end="", flush=True)

# Use the handler with streaming enabled; each streamed token triggers on_llm_new_token
handler = SecretStreamingHandler(width=80)
response = secret_ai_llm.invoke(
    messages, 
    config={"callbacks": [handler]}, 
    stream=True
)

Enhanced Error Handling

The SDK provides comprehensive error handling with specific exception types:
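The concrete exception classes are not shown in this extract, so the sketch below is an assumption: it imports a base exception from the SDK's exception module (the module and class names are guesses, flagged in the comments) and falls back to the built-in Exception so the pattern still runs if the names differ in your installed version.

# NOTE: the module and exception names below are assumptions; check the exception
# module shipped with your installed secret_ai_sdk version for the exact classes.
try:
    from secret_ai_sdk.secret_ai_ex import SecretAIError  # assumed SDK base exception
except ImportError:
    SecretAIError = Exception  # fallback so the sketch still runs if the name differs

try:
    response = secret_ai_llm.invoke(messages)
    print(response.content)
except SecretAIError as exc:
    # Failures raised by the SDK itself (configuration, connectivity, bad input, ...)
    print(f"Secret AI error: {exc}")
except Exception as exc:
    # Anything outside the SDK's exception hierarchy
    print(f"Unexpected error: {exc}")
    raise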

Enhanced Client with Custom Configuration

Use the enhanced client for advanced retry and timeout configuration:
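The enhanced client's name and constructor options are not shown in this extract. As a stand-in, the sketch below wraps a standard ChatSecret call in a plain retry loop with exponential backoff; invoke_with_retries, max_retries, and backoff are illustrative helpers defined here, not confirmed SDK parameters.

import time

def invoke_with_retries(llm, messages, max_retries: int = 3, backoff: float = 2.0):
    """Call llm.invoke() and retry with exponential backoff on failure."""
    for attempt in range(1, max_retries + 1):
        try:
            return llm.invoke(messages)
        except Exception as exc:
            if attempt == max_retries:
                raise  # out of retries: surface the original error
            wait = backoff ** attempt
            print(f"Attempt {attempt} failed ({exc}); retrying in {wait:.0f}s")
            time.sleep(wait)

response = invoke_with_retries(secret_ai_llm, messages, max_retries=3)
print(response.content)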

Voice Processing Examples

Context Manager Usage

Streaming TTS
