|
4 | 4 | "attachments": {}, |
5 | 5 | "cell_type": "markdown", |
6 | 6 | "metadata": {}, |
7 | | - "source": [ |
8 | | - "# Vectorizers\n", |
9 | | - "\n", |
10 | | - "In this notebook, we will show how to use RedisVL to create embeddings using the built-in text embedding vectorizers. Today RedisVL supports:\n", |
11 | | - "1. OpenAI\n", |
12 | | - "2. HuggingFace\n", |
13 | | - "3. Vertex AI\n", |
14 | | - "4. Cohere\n", |
15 | | - "5. Mistral AI\n", |
16 | | - "6. Amazon Bedrock\n", |
17 | | - "7. Bringing your own vectorizer\n", |
18 | | - "8. VoyageAI\n", |
19 | | - "\n", |
20 | | - "Before running this notebook, be sure to\n", |
21 | | - "1. Have installed ``redisvl`` and have that environment active for this notebook.\n", |
22 | | - "2. Have a running Redis Stack instance with RediSearch > 2.4 active.\n", |
23 | | - "\n", |
24 | | - "For example, you can run Redis Stack locally with Docker:\n", |
25 | | - "\n", |
26 | | - "```bash\n", |
27 | | - "docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest\n", |
28 | | - "```\n", |
29 | | - "\n", |
30 | | - "This will run Redis on port 6379 and RedisInsight at http://localhost:8001." |
31 | | - ] |
| 7 | + "source": "# Vectorizers\n\nIn this notebook, we will show how to use RedisVL to create embeddings using the built-in text embedding vectorizers. Today RedisVL supports:\n1. OpenAI\n2. HuggingFace\n3. Vertex AI\n4. Cohere\n5. Mistral AI\n6. Amazon Bedrock\n7. Bringing your own vectorizer\n8. VoyageAI (text and multimodal)\n\nBefore running this notebook, be sure to\n1. Have installed ``redisvl`` and have that environment active for this notebook.\n2. Have a running Redis Stack instance with RediSearch > 2.4 active.\n\nFor example, you can run Redis Stack locally with Docker:\n\n```bash\ndocker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest\n```\n\nThis will run Redis on port 6379 and RedisInsight at http://localhost:8001." |
32 | 8 | }, |
33 | 9 | { |
34 | 10 | "cell_type": "code", |
|
452 | 428 | "execution_count": null, |
453 | 429 | "outputs": [] |
454 | 430 | }, |
| 431 | + { |
| 432 | + "cell_type": "markdown", |
| 433 | + "source": "#### Multimodal Embeddings\n\nVoyageAI also offers multimodal embedding models that can embed text, images, and video together. The `VoyageAIMultimodalVectorizer` supports:\n- **Text strings** - Plain text descriptions\n- **PIL Images** - Image objects loaded with PIL/Pillow\n- **Image URLs** - URLs pointing to images\n- **Video** - Video files (voyage-multimodal-3.5 only, requires voyageai>=0.3.6)\n\nAvailable models:\n- `voyage-multimodal-3` - Text and images\n- `voyage-multimodal-3.5` - Text, images, and video", |
| 434 | + "metadata": {} |
| 435 | + }, |
| 436 | + { |
| 437 | + "cell_type": "code", |
| 438 | + "source": "from redisvl.utils.vectorize import VoyageAIMultimodalVectorizer\nfrom PIL import Image\n\n# Create a multimodal vectorizer\nmultimodal_vo = VoyageAIMultimodalVectorizer(\n model=\"voyage-multimodal-3\",\n api_config={\"api_key\": api_key},\n)\n\n# Embed text only\ntext_embedding = multimodal_vo.embed(\n content=[\"A description of a sunset over the ocean\"],\n input_type=\"document\"\n)\nprint(f\"Text-only embedding dimensions: {len(text_embedding)}\")\n\n# Embed text with an image (uncomment to test with your own image)\n# image = Image.open(\"your_image.jpg\")\n# multimodal_embedding = multimodal_vo.embed(\n# content=[\"A photo showing a beautiful sunset\", image],\n# input_type=\"document\"\n# )\n# print(f\"Multimodal embedding dimensions: {len(multimodal_embedding)}\")\n\n# Batch embedding multiple contents\ncontents = [\n [\"First text description\"],\n [\"Second text description\"],\n [\"Third text description\"],\n]\nembeddings = multimodal_vo.embed_many(contents, input_type=\"document\")\nprint(f\"Generated {len(embeddings)} embeddings\")", |
| 439 | + "metadata": {}, |
| 440 | + "execution_count": null, |
| 441 | + "outputs": [] |
| 442 | + }, |
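The markdown cell above lists image URLs among the supported inputs, but the code cell only demonstrates text and PIL images. Below is a minimal sketch of URL-based input, assuming URL strings in the `content` list are recognized as image references (as that cell's supported-input list suggests); the URL is a placeholder, not a real asset:

```python
# Hypothetical image URL -- replace with a URL pointing to a real image.
image_url = "https://example.com/sunset.jpg"

# Assumes the vectorizer treats URL strings in the content list as image
# inputs, per the supported-input list in the markdown cell above.
url_embedding = multimodal_vo.embed(
    content=["A photo of a sunset over the ocean", image_url],
    input_type="document",
)
print(f"Text + image-URL embedding dimensions: {len(url_embedding)}")
```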
| 443 | + { |
| 444 | + "cell_type": "markdown", |
| 445 | + "source": "#### Video Embeddings (voyage-multimodal-3.5)\n\nThe `voyage-multimodal-3.5` model supports video inputs in addition to text and images. Videos must be loaded using VoyageAI's `Video` utility class.\n\n**Requirements:**\n- Model: `voyage-multimodal-3.5` only\n- Package: `voyageai>=0.3.6`\n- Max video size: 20 MB", |
| 446 | + "metadata": {} |
| 447 | + }, |
| 448 | + { |
| 449 | + "cell_type": "code", |
| 450 | + "source": "# Video embedding example (uncomment to test with your own video)\n# from voyageai.video_utils import Video\n# \n# # Create vectorizer with video-capable model\n# video_vo = VoyageAIMultimodalVectorizer(\n# model=\"voyage-multimodal-3.5\",\n# api_config={\"api_key\": api_key},\n# )\n# \n# # Load video using VoyageAI's Video utility\n# video = Video.from_path(\"your_video.mp4\", model=\"voyage-multimodal-3.5\")\n# \n# # Embed video with text description\n# video_embedding = video_vo.embed(\n# content=[\"A video showing a cat playing with a toy\", video],\n# input_type=\"document\"\n# )\n# print(f\"Video embedding dimensions: {len(video_embedding)}\")", |
| 451 | + "metadata": {}, |
| 452 | + "execution_count": null, |
| 453 | + "outputs": [] |
| 454 | + }, |
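The cells above embed document-side content with `input_type="document"`. At query time, the same vectorizer can embed the search text with `input_type="query"`, which VoyageAI uses to produce retrieval-optimized embeddings. A brief sketch, reusing the `multimodal_vo` instance from the earlier cell:

```python
# Embed a short search query against the documents embedded above.
# input_type="query" asks VoyageAI for a retrieval-optimized embedding.
query_embedding = multimodal_vo.embed(
    content=["sunset over the ocean"],
    input_type="query",
)
print(f"Query embedding dimensions: {len(query_embedding)}")
```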
455 | 455 | { |
456 | 456 | "cell_type": "markdown", |
457 | 457 | "metadata": {}, |
|