
Feedback: lemur-examples

Original URL: https://www.assemblyai.com/docs/lemur/examples
Category: lemur
Generated: 05/08/2025, 4:29:02 pm



Technical Documentation Analysis & Feedback


This documentation provides a solid foundation for LeMUR custom prompts but has several areas for improvement in clarity, structure, and user experience.


1. Missing Prerequisites & Setup Information

  • Issue: No clear explanation of what LeMUR is or how it works
  • Fix: Add a brief “What is LeMUR?” section explaining it’s an LLM layer for audio analysis
  • Add: Link to pricing information (only mentions “credit card required”)
  • Add: API rate limits and usage quotas

2. Insufficient Error Handling

  • Issue: Code examples lack comprehensive error handling
  • Fix: Add error handling for common scenarios:
    # Add to all examples
    try:
        result = transcript.lemur.task(prompt)
    except aai.LemurError as e:
        print(f"LeMUR processing failed: {e}")
    except aai.TranscriptError as e:
        print(f"Transcript error: {e}")
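Beyond catching exceptions, the docs could also show how to retry transient failures. A minimal, SDK-agnostic sketch; the `RuntimeError` stand-in, backoff values, and stub task are illustrative, not AssemblyAI behavior:

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=0.01):
    """Call `task` (a zero-argument callable), retrying on failure with
    exponential backoff; re-raises after the final attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except RuntimeError:  # stand-in for a transient LeMUR failure
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Stub that fails twice, then succeeds:
calls = {"n": 0}
def flaky_task():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("temporary LeMUR outage")
    return "analysis complete"

print(run_with_retries(flaky_task))  # analysis complete
```

In real usage `task` would wrap the actual `transcript.lemur.task(prompt)` call, with the SDK's own exception types in place of `RuntimeError`.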

3. Missing Response Format Documentation

  • Issue: Users don’t know what to expect in the response
  • Fix: Add a “Response Format” section with example output:
    {
      "request_id": "example-id",
      "response": "The emotional sentiment was predominantly positive...",
      "usage": {
        "input_tokens": 150,
        "output_tokens": 45
      }
    }
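A short helper could accompany the response example to show how the `usage` fields are consumed; `summarize_usage` is a hypothetical name, not an SDK function:

```python
def summarize_usage(response: dict) -> str:
    """Format the token usage from a LeMUR-style response dict
    (hypothetical helper, not part of the SDK)."""
    usage = response.get("usage", {})
    tokens_in = usage.get("input_tokens", 0)
    tokens_out = usage.get("output_tokens", 0)
    return (f"request {response['request_id']}: "
            f"{tokens_in + tokens_out} tokens "
            f"({tokens_in} in / {tokens_out} out)")

example = {
    "request_id": "example-id",
    "response": "The emotional sentiment was predominantly positive...",
    "usage": {"input_tokens": 150, "output_tokens": 45},
}
print(summarize_usage(example))  # request example-id: 195 tokens (150 in / 45 out)
```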

4. Information Architecture

  • Current flow: Example → Use cases → Ideas table
  • Improved flow:
    1. What is LeMUR & Prerequisites
    2. Basic example with detailed explanation
    3. Response format & error handling
    4. Advanced examples
    5. Use case gallery
    6. Best practices

5. Inconsistent Code Examples

  • Issue: Some languages have more complete examples than others
  • Fix: Standardize all code examples to include:
    • Error handling
    • Comments explaining each step
    • Expected output examples

6. Missing Model Parameter Documentation

  • Missing: Explanation of final_model parameter options
  • Add: Table of available models with descriptions:
    | Model | Best For | Token Limit |
    |-------|----------|-------------|
    | claude-sonnet-4 | Complex analysis, long content | 200k |
    | gpt-4 | General purpose | 128k |
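The model table could also be paired with a selection snippet that picks the smallest model whose context window fits the input. A sketch; the names and limits mirror the table above and are assumptions, not an authoritative AssemblyAI list:

```python
# Model names and context limits are taken from the suggested table above
# and are illustrative assumptions, not an official list.
MODEL_LIMITS = {
    "gpt-4": 128_000,
    "claude-sonnet-4": 200_000,
}

def pick_model(estimated_input_tokens: int) -> str:
    """Return the smallest model whose context window fits the input."""
    for model, limit in sorted(MODEL_LIMITS.items(), key=lambda kv: kv[1]):
        if estimated_input_tokens < limit:
            return model
    raise ValueError("input exceeds every model's context window; segment the audio")

print(pick_model(50_000))   # gpt-4
print(pick_model(150_000))  # claude-sonnet-4
```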

7. Weak Prompt Examples

  • Issue: “What was the emotional sentiment?” is too basic
  • Fix: Provide more sophisticated examples:
    # Better examples
    prompt = """Analyze the emotional sentiment of this phone call and provide:
    1. Overall sentiment score (1-10)
    2. Key emotional moments with timestamps
    3. Sentiment progression throughout the call
    4. Specific phrases that indicate sentiment"""
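Structured prompts like this can also be assembled programmatically, which keeps multi-requirement prompts consistent across examples. A sketch; `build_sentiment_prompt` is a hypothetical helper, not part of any SDK:

```python
def build_sentiment_prompt(requirements):
    """Assemble a numbered analysis prompt from a list of requested
    outputs (hypothetical helper, not part of any SDK)."""
    numbered = "\n".join(f"{i}. {req}" for i, req in enumerate(requirements, 1))
    return ("Analyze the emotional sentiment of this phone call and provide:\n"
            + numbered)

prompt = build_sentiment_prompt([
    "Overall sentiment score (1-10)",
    "Key emotional moments with timestamps",
    "Sentiment progression throughout the call",
])
print(prompt)
```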

8. Missing Context About Audio Requirements

  • Add: Supported audio formats, file size limits, duration limits
  • Add: Best practices for audio quality

9. Learning Path & Extras

  • Add: “Beginner → Intermediate → Advanced” examples
  • Add: Common pitfalls and troubleshooting section
  • Add: “Try it yourself” section with a working example
  • Add: Expected execution time estimates

10. Use Case Gallery

  • Current: Just links to other pages
  • Improved: Include 2-3 complete examples with actual prompts and outputs

Suggested New Sections

  1. Prerequisites & Setup

    ## Before You Begin
    - AssemblyAI account with API key
    - Audio file (supported formats: mp3, wav, m4a, etc.)
    - Credit card on file (LeMUR uses token-based pricing)
    - Estimated processing time: 30-60 seconds per minute of audio
  2. Best Practices

    ## Prompt Engineering Tips
    - Be specific about desired output format
    - Include context about the audio content
    - Use structured prompts for complex analysis
    - Test with shorter audio files first
  3. Troubleshooting

    ## Common Issues
    - **Long processing times**: Large files may take several minutes
    - **Token limits**: Break long audio into segments
    - **Generic responses**: Make prompts more specific
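The token-limit tip could include a segmentation sketch. This one splits transcript text using a rough tokens-per-word heuristic (~1.3 tokens per word is a common rule of thumb, not an AssemblyAI figure):

```python
def chunk_words(text: str, max_tokens: int, tokens_per_word: float = 1.3):
    """Split transcript text into chunks that stay under a rough token
    budget, assuming ~1.3 tokens per word (heuristic, not exact)."""
    words_per_chunk = max(1, int(max_tokens / tokens_per_word))
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

segments = chunk_words("word " * 1000, max_tokens=130)
print(len(segments))  # number of sub-transcripts to process separately
```

Each segment could then be sent through LeMUR separately and the partial results merged.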

Code Example Enhancements

  • Add timing information
  • Show progress indicators for long operations
  • Include validation steps
  • Add debug output options
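The progress-indicator suggestion could be illustrated with a generic polling loop. The status strings and callback below are illustrative, since the real SDK may handle polling internally:

```python
import time

def wait_until_done(get_status, poll_seconds=0.0, on_progress=print):
    """Poll a zero-argument `get_status` callable until it reports
    'completed' or 'error', surfacing each status via `on_progress`."""
    while True:
        status = get_status()
        on_progress(f"status: {status}")
        if status in ("completed", "error"):
            return status
        time.sleep(poll_seconds)

# Stubbed status sequence standing in for repeated API checks:
statuses = iter(["queued", "processing", "completed"])
final = wait_until_done(lambda: next(statuses))
print(final)  # completed
```

In real usage `get_status` would query the job's current state, and `poll_seconds` would be a few seconds rather than zero.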

Replace the simple table with categorized, detailed examples including expected output formats and use case scenarios.


Recommended Priorities

  1. High Priority: Add response format documentation and error handling
  2. Medium Priority: Restructure information hierarchy and add prerequisites
  3. Low Priority: Enhanced examples and interactive elements

This documentation has good bones but needs significant enhancement to reduce user friction and improve success rates.