Drag and drop an image here, click to select, or paste from clipboard
Supports PNG, JPG, JPEG, GIF, WEBP (max 16MB)
Low detail processes every image at a fixed 85 tokens. High detail provides better image understanding using the tiling approach described in OpenAI's documentation.
Results
Model
GPT-4o
Token Usage
0
Estimated Cost
$0.000000
Original Dimensions
0 × 0
Resized Dimensions
0 × 0
Tiles/Patches
0 × 0
How Token Usage Is Calculated
For This Image:
GPT-4o/GPT-4.1 Models: Images are first scaled to fit within a 2048 × 2048 px square, then scaled down so the shortest side is 768 px. The result is divided into 512 × 512 px tiles, each costing 170 tokens, plus a fixed 85 base tokens per image, matching OpenAI's official calculation method.
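The tile-based calculation above can be sketched as follows. This is a simplified estimate based on OpenAI's documented method; exact rounding during resize may differ slightly:

```python
import math

def tile_tokens(width: int, height: int) -> int:
    """Estimate high-detail image tokens for GPT-4o/GPT-4.1 (tile method)."""
    # 1. Scale to fit within a 2048 x 2048 square, preserving aspect ratio.
    if max(width, height) > 2048:
        scale = 2048 / max(width, height)
        width, height = int(width * scale), int(height * scale)
    # 2. Scale down so the shortest side is 768 px.
    if min(width, height) > 768:
        scale = 768 / min(width, height)
        width, height = int(width * scale), int(height * scale)
    # 3. Count 512 x 512 tiles; partial tiles count as full tiles.
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    # 4. 170 tokens per tile plus 85 base tokens per image.
    return 170 * tiles + 85
```

For example, a 1024 × 1024 image scales to 768 × 768, covers 4 tiles, and costs 170 × 4 + 85 = 765 tokens.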
Mini/Nano Models: Images are divided into 32 × 32 px patches. If the patch count would exceed 1536, the image is first scaled down to fit within that limit. The token count equals the number of patches, as described in OpenAI's documentation.
About Cost Estimates: Displayed costs are based on OpenAI's published rate card: GPT-4o/4.1 at $10 per 1M tokens, Mini models at $5 per 1M tokens, and Nano models at $3 per 1M tokens. Actual costs may vary depending on your specific service agreement or API pricing tier.
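The cost estimate is a straight per-million-token multiplication. A minimal sketch, using the rates listed above (verify against OpenAI's current pricing page before relying on them):

```python
# USD per 1M tokens, per the rate card above.
RATES_PER_MTOK = {"gpt-4o": 10.0, "mini": 5.0, "nano": 3.0}

def estimate_cost(tokens: int, model_family: str) -> float:
    """Estimated cost in USD for a given token count and model family."""
    return tokens * RATES_PER_MTOK[model_family] / 1_000_000
```

For example, 765 tokens on GPT-4o comes to 765 × $10 / 1,000,000 = $0.00765.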
Low Detail Level: As stated in OpenAI's documentation, setting "detail": "low" processes images at a fixed 85 tokens regardless of size. This is ideal for tasks that don't require fine detail, or when you want to minimize token usage and speed up responses.
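In the Chat Completions API, the detail level is set per image inside the message content. A sketch of the request payload (the URL is a placeholder):

```python
# Chat Completions payload with a low-detail image input,
# following OpenAI's image-input documentation.
payload = {
    "model": "gpt-4o",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/photo.png",  # placeholder
                    "detail": "low",  # fixed 85-token processing
                },
            },
        ],
    }],
}
```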
Note: Token calculations are estimates based on available documentation and may vary slightly from actual usage.