API Reference
Complete reference for all VideoIntel.js classes, methods, and types.
VideoIntel Class
The main class for video analysis. Import the default export to get a singleton instance:
import videoIntel from 'videointel';
// Use the singleton instance
await videoIntel.init();
const results = await videoIntel.analyze(file);
init(config?)
Initialize VideoIntel with optional configuration. Calling this method is optional; the library auto-initializes on first use if it is not called explicitly.
interface VideoIntelConfig {
  workers?: number; // Number of web workers (default: CPU cores)
  models?: string[]; // Models to preload (future feature)
}
await videoIntel.init({
  workers: 4,
});
Parameters
config (optional) - Configuration object
Returns
Promise<void>
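For example, you could size the worker pool yourself instead of relying on the default, using the standard navigator.hardwareConcurrency API (a sketch; the cap of 4 is arbitrary):
// Use up to 4 workers, fewer on machines with fewer cores
const cores = navigator.hardwareConcurrency || 2;
await videoIntel.init({ workers: Math.min(cores, 4) });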
analyze(videoInput, options?)
Analyze a video with multiple features in a single call. This is the most efficient approach when you need several analysis results.
interface AnalysisOptions {
  thumbnails?: {
    enabled?: boolean; // Default: true
    count?: number; // Default: 5
    quality?: number; // Default: 0.8 (0-1)
    width?: number; // Optional resize width
    height?: number; // Optional resize height
  };
  scenes?: {
    enabled?: boolean; // Default: true
    threshold?: number; // Default: 30 (0-100)
    minDuration?: number; // Default: 1 (seconds)
  };
  colors?: {
    enabled?: boolean; // Default: true
    count?: number; // Default: 5
    quality?: number; // Default: 10 (1-10)
  };
  metadata?: boolean; // Default: true
}
const results = await videoIntel.analyze(file, {
  thumbnails: { count: 10, quality: 0.9 },
  scenes: { threshold: 25 },
  colors: { count: 8 },
  metadata: true,
});
Parameters
videoInput - Video file (File, Blob, or HTMLVideoElement)
options (optional) - Analysis options
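For illustration, the same call works with a File picked by the user or with an existing video element (a sketch assuming an <input type="file"> and a <video> element are present on the page):
// From a file input
const input = document.querySelector('input[type="file"]');
const fromFile = await videoIntel.analyze(input.files[0]);
// From a <video> element already on the page
const fromElement = await videoIntel.analyze(document.querySelector('video'));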
Returns
interface AnalysisResult {
  thumbnails?: Thumbnail[];
  scenes?: Scene[];
  colors?: Color[];
  metadata?: VideoMetadata;
  performance?: {
    totalTime: number;
    thumbnailTime?: number;
    sceneTime?: number;
    colorTime?: number;
    metadataTime?: number;
  };
}
getThumbnails(videoInput, options?)
Generate smart thumbnails from key moments in the video.
interface ThumbnailOptions {
  count?: number; // Number of thumbnails (default: 5)
  quality?: number; // Image quality 0-1 (default: 0.8)
  width?: number; // Resize width (optional)
  height?: number; // Resize height (optional)
  format?: 'jpeg' | 'png'; // Image format (default: 'jpeg')
}
const thumbnails = await videoIntel.getThumbnails(file, {
  count: 10,
  quality: 0.9,
  width: 1280,
  height: 720,
});
Returns
interface Thumbnail {
  dataUrl: string; // Base64 encoded image
  timestamp: number; // Time in video (seconds)
  quality: number; // Quality score 0-1
  width: number; // Image width
  height: number; // Image height
}
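Since each thumbnail carries a quality score, you can rank candidates; for instance, a sketch that picks the highest-scoring frame:
// Pick the highest-scoring thumbnail (e.g. for a poster image)
const best = thumbnails.reduce((a, b) => (b.quality > a.quality ? b : a));
console.log(`Best frame at ${best.timestamp}s (score: ${best.quality})`);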
// Example usage
thumbnails.forEach(thumb => {
  const img = document.createElement('img');
  img.src = thumb.dataUrl;
  img.alt = `Thumbnail at ${thumb.timestamp}s`;
  document.body.appendChild(img);
});
detectScenes(videoInput, options?)
Automatically detect scene changes in the video.
interface SceneOptions {
  threshold?: number; // Sensitivity 0-100 (default: 30)
  minDuration?: number; // Min scene duration in seconds (default: 1)
  maxScenes?: number; // Max number of scenes (optional)
}
const scenes = await videoIntel.detectScenes(file, {
  threshold: 25,
  minDuration: 2,
});
Returns
interface Scene {
  timestamp: number; // Scene start time (seconds)
  score: number; // Detection confidence 0-100
  duration?: number; // Scene duration (if known)
}
// Example: Create chapter markers
scenes.forEach((scene, i) => {
  console.log(`Chapter ${i + 1}: ${scene.timestamp}s`);
});
extractColors(videoInput, options?)
Extract dominant colors from the video using advanced color quantization.
interface ColorOptions {
  count?: number; // Number of colors (default: 5)
  quality?: number; // Sampling quality 1-10 (default: 10)
  sampleSize?: number; // Number of frames to sample (optional)
}
const colors = await videoIntel.extractColors(file, {
  count: 8,
  quality: 10,
});
Returns
interface Color {
  hex: string; // Hex color code
  rgb: [number, number, number]; // RGB values (0-255)
  percentage: number; // Usage percentage (0-100)
  pixels?: number; // Pixel count (optional)
}
// Example: Create color palette
colors.forEach(color => {
  const div = document.createElement('div');
  div.style.backgroundColor = color.hex;
  div.style.width = `${color.percentage}%`;
  div.title = `${color.hex} - ${color.percentage}%`;
  document.body.appendChild(div);
});
getMetadata(videoInput)
Extract technical metadata from the video file.
const metadata = await videoIntel.getMetadata(file);
console.log(`Duration: ${metadata.duration}s`);
console.log(`Resolution: ${metadata.width}x${metadata.height}`);
console.log(`Frame Rate: ${metadata.frameRate} fps`);
Returns
interface VideoMetadata {
  duration: number; // Video duration in seconds
  width: number; // Video width in pixels
  height: number; // Video height in pixels
  frameRate: number; // Frames per second
  size: number; // File size in bytes
  format?: string; // Video format/codec
  bitrate?: number; // Bitrate in bps
  hasAudio?: boolean; // Whether video has audio track
}
dispose()
Clean up resources and terminate worker threads. Call this when you're done using VideoIntel to free memory.
// When finished with video analysis
await videoIntel.dispose();
// Re-initialize if you need to use it again
await videoIntel.init();
Returns
Promise<void>
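One common pattern is to pair analysis with cleanup in a try/finally block so workers are always released (a minimal sketch):
try {
  const results = await videoIntel.analyze(file);
  // ... use results ...
} finally {
  // Always free workers and memory, even if analysis throws
  await videoIntel.dispose();
}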
Type Definitions
VideoIntel.js exports all TypeScript types for use in your projects:
import type {
  // Main types
  VideoIntelConfig,
  AnalysisOptions,
  AnalysisResult,
  // Feature-specific types
  ThumbnailOptions,
  Thumbnail,
  SceneOptions,
  Scene,
  ColorOptions,
  Color,
  VideoMetadata,
  // Input types
  VideoInput,
} from 'videointel';
// Use in your code for type safety
const config: VideoIntelConfig = { workers: 4 };
const options: AnalysisOptions = { thumbnails: { count: 10 } };
💡 Pro Tip
Use the analyze() method when you need multiple features: it is more efficient than calling each method separately because it only processes the video once.
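For example, the first form below processes the video once, while the separate calls each process it again (illustrative sketch):
// One pass: thumbnails and scenes from a single analyze() call
const { thumbnails, scenes } = await videoIntel.analyze(file, {
  colors: { enabled: false },
  metadata: false,
});
// Two passes: each call processes the video again
const thumbs = await videoIntel.getThumbnails(file);
const sceneList = await videoIntel.detectScenes(file);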
📚 Next Steps
- Read the Guides for detailed tutorials
- Check out Framework Integration Examples
- Try the Interactive Playground