Feedback: guides-task-endpoint-custom-summary

Original URL: https://www.assemblyai.com/docs/guides/task-endpoint-custom-summary
Category: guides
Generated: 05/08/2025, 4:36:19 pm

Technical Documentation Analysis: Custom Summary Using LeMUR

This documentation provides a functional walkthrough but lacks depth in several critical areas. While the basic implementation is clear, users may struggle with customization, error handling, and understanding the broader context of the feature.

  • Missing: Minimum Python version requirements
  • Missing: Complete list of dependencies
  • Missing: Account tier requirements beyond “upgrade with credit card”
  • Add: Clear system requirements section
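To make the requirements actionable, the page could open with a minimal setup snippet. This sketch assumes the published `assemblyai` package and its `aai.settings.api_key` configuration; the exact minimum Python version should be confirmed against the current release notes:

```python
# First: pip install -U assemblyai
import assemblyai as aai

# API key from the AssemblyAI dashboard; LeMUR access may require an upgraded account tier
aai.settings.api_key = "YOUR_API_KEY"
```
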
```python
# Add this comprehensive error handling example
import assemblyai as aai

try:
    transcript = aai.Transcriber().transcribe(audio_url)
    if transcript.status == aai.TranscriptStatus.error:
        print(f"Transcription failed: {transcript.error}")
    else:
        result = transcript.lemur.task(prompt, final_model=aai.LemurModel.claude3_5_sonnet)
# Exception names below are illustrative; check the SDK for the exact error classes it raises
except aai.AuthenticationError:
    print("Invalid API key or insufficient permissions")
except aai.RateLimitError:
    print("Rate limit exceeded. Please wait before retrying.")
except Exception as e:
    print(f"Unexpected error: {e}")
```
```python
# Document the complete response object
print(f"Summary: {result.response}")
print(f"Request ID: {result.request_id}")
print(f"Usage: {result.usage}")  # Token usage information
```

Recommended document structure:

```markdown
# Generate Custom Summaries Using LeMUR

## Overview
Brief explanation of what LeMUR does and use cases

## Prerequisites
- Python 3.7+
- AssemblyAI account with LeMUR access
- API key from dashboard

## Quick Start
[Current quickstart code with error handling]

## Detailed Guide
[Step-by-step breakdown]

## Customization Options
[Prompt engineering, models, formats]

## Advanced Examples
[Multiple examples for different use cases]

## Troubleshooting
[Common issues and solutions]

## API Reference
[Complete parameter documentation]
```

Unclear Explanations Requiring Clarification

```python
# Add explanation of available models
available_models = {
    aai.LemurModel.claude3_5_sonnet: "Best for complex analysis, higher cost",
    aai.LemurModel.claude3_haiku: "Faster, cost-effective for simple tasks",
    # Add other available models with use case guidance
}
```
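A short selection snippet could make the trade-off concrete. In this sketch the `quick_summary` flag and the selection logic are hypothetical, and `transcript`/`prompt` are assumed from the quickstart:

```python
# Illustrative model selection: the cheaper, faster model for simple tasks,
# the larger model for complex analysis
quick_summary = True  # hypothetical flag set by the caller
final_model = (
    aai.LemurModel.claude3_haiku if quick_summary else aai.LemurModel.claude3_5_sonnet
)
result = transcript.lemur.task(prompt, final_model=final_model)
```
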
### Prompt Best Practices
**Effective Prompt Structure:**
- Clear role definition
- Specific output requirements
- Format specifications
- Constraints and limitations
**Example Variations:**
```python
# For meeting summaries
meeting_prompt = """You are a meeting secretary. Create an executive summary focusing on:
- Key decisions made
- Action items assigned
- Next steps identified
Format as numbered sections."""
# For interview summaries
interview_prompt = """Summarize this interview highlighting:
- Main topics discussed
- Key insights shared
- Notable quotes
Keep under 200 words."""
# Add these diverse examples:
# 1. Customer service call summary
customer_service_prompt = """
Summarize this customer service interaction:
- Customer issue/concern
- Solution provided
- Customer satisfaction level
- Follow-up required (yes/no)
"""
# 2. Podcast episode summary
podcast_prompt = """
Create a podcast episode summary with:
- Main topic and guests
- Key discussion points (3-5 bullets)
- Notable quotes or insights
- Target audience takeaways
"""
# 3. Legal deposition summary
legal_prompt = """
Provide a legal deposition summary including:
- Parties involved
- Key testimony points
- Objections raised
- Critical admissions or denials
"""
format_examples = {
    "Executive Summary": "3-paragraph format with overview, details, conclusion",
    "Bullet Points": "Hierarchical bullet structure",
    "Timeline": "Chronological event sequence",
    "Q&A Format": "Question and answer pairs",
    "Table": "Structured data in tabular format"
}
```
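The docs could also show how one of these format descriptions feeds into a task prompt. In this sketch the prompt wording is illustrative and `transcript` is assumed from the quickstart:

```python
# Combine a base instruction with a chosen output format description
chosen_format = format_examples["Bullet Points"]
prompt = (
    "Summarize the key points of this transcript.\n"
    f"Output format: {chosen_format}"
)
result = transcript.lemur.task(prompt, final_model=aai.LemurModel.claude3_5_sonnet)
print(result.response)
```
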
### Limitations and Constraints
- Maximum audio file size: [specify limit]
- Processing time estimates: [provide ranges]
- Transcript length limits for LeMUR: [specify]
- Rate limiting: [requests per minute/hour]
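Because none of these limits are currently documented, a rate-limit handling sketch would also help. The helper below is hypothetical; the backoff parameters and the broad `except` are assumptions, not documented SDK behavior:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on transient failures."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:  # Narrow this to the SDK's rate-limit error if one is documented
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            print(f"Request failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Example (assumes `transcript` and `prompt` from the quickstart):
# result = call_with_backoff(
#     lambda: transcript.lemur.task(prompt, final_model=aai.LemurModel.claude3_5_sonnet)
# )
```
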
```python
# Add cost estimation helper
def estimate_cost(audio_duration_minutes, transcript_length_words):
    """
    Estimate processing costs for transcription + LeMUR summary

    Args:
        audio_duration_minutes: Length of audio file
        transcript_length_words: Estimated word count

    Returns:
        dict: Cost breakdown
    """
    transcription_cost = audio_duration_minutes * 0.00065  # Example rate; verify against current pricing
    lemur_cost = (transcript_length_words / 1000) * 0.015  # Rough estimate; verify against current pricing
    return {
        "transcription": transcription_cost,
        "lemur": lemur_cost,
        "total": transcription_cost + lemur_cost
    }
```
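A quick usage example could follow the helper; since the per-unit rates above are rough estimates, the output should be treated as an order-of-magnitude figure:

```python
# Rough estimate for a 30-minute call with ~4,500 transcribed words
costs = estimate_cost(audio_duration_minutes=30, transcript_length_words=4500)
print(f"Transcription: ${costs['transcription']:.2f}")
print(f"LeMUR: ${costs['lemur']:.2f}")
print(f"Total: ${costs['total']:.2f}")
```
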
```python
# Add async example for large files
import asyncio

import assemblyai as aai

async def process_large_file(audio_url):
    """Handle large files with async processing"""
    config = aai.TranscriptionConfig(
        speech_model=aai.SpeechModel.best  # For better accuracy
    )
    transcriber = aai.Transcriber(config=config)
    # transcribe_async returns a concurrent.futures.Future that resolves once the
    # transcript has finished processing, so wrap it to await it from asyncio code
    # instead of polling manually.
    transcript = await asyncio.wrap_future(transcriber.transcribe_async(audio_url))
    if transcript.status == aai.TranscriptStatus.error:
        raise RuntimeError(f"Transcription failed: {transcript.error}")
    return transcript
```
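A short usage note would round this out; the URL below is a placeholder and `aai.settings.api_key` is assumed to be configured already:

```python
# Run the async helper from synchronous code, then summarize as usual
transcript = asyncio.run(process_large_file("https://example.com/long-recording.mp3"))
result = transcript.lemur.task(
    "Summarize this recording.", final_model=aai.LemurModel.claude3_5_sonnet
)
print(result.response)
```
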
  • Link to live demo or playground
  • “Try it now” button with sample audio
  • Interactive prompt builder
  • Link to related LeMUR endpoints (Q&A, Topics)
  • Reference prompt engineering best practices
  • Connect to audio preprocessing guides

Provide production-ready templates (see the sketch after this list) with:

  • Environment variable management
  • Logging configuration
  • Configuration file usage
  • Batch processing examples
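
A hedged starter template covering most of these points might look like the following; the environment variable name, URLs, and prompt are placeholders, and configuration-file handling is omitted for brevity:

```python
import logging
import os

import assemblyai as aai

# Environment variable management: never hard-code the API key
aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]

# Logging configuration
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("lemur_summaries")

SUMMARY_PROMPT = "Provide a brief summary of this transcript."

def summarize(audio_url):
    """Transcribe one file and return a LeMUR summary, or None on failure."""
    transcript = aai.Transcriber().transcribe(audio_url)
    if transcript.status == aai.TranscriptStatus.error:
        log.error("Transcription failed for %s: %s", audio_url, transcript.error)
        return None
    result = transcript.lemur.task(SUMMARY_PROMPT, final_model=aai.LemurModel.claude3_5_sonnet)
    return result.response

if __name__ == "__main__":
    # Batch processing example: iterate over a list of audio URLs
    audio_files = [
        "https://example.com/call-1.mp3",
        "https://example.com/call-2.mp3",
    ]
    for url in audio_files:
        summary = summarize(url)
        if summary:
            log.info("Summary for %s:\n%s", url, summary)
```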

This documentation would significantly benefit from these improvements to provide a more comprehensive, user-friendly experience that anticipates and addresses real-world implementation challenges.