diff --git a/bundles/custom_formats.json b/bundles/custom_formats.json
index e1611d4..e16c575 100644
--- a/bundles/custom_formats.json
+++ b/bundles/custom_formats.json
@@ -4937,7 +4937,7 @@
{
"name": "MA Regex",
"negate": false,
- "pattern": "(?
"_id": "EEi",
"content": "- It rewards efficient encodes regardless of codec choice\n- It catches inefficient HEVC encodes that waste space\n- It avoids the complexity of parsing inconsistent HEVC labeling (h265/x265)\n- It future-proofs the system for newer codecs like AV1, where we can simply adjust our codec ranking priorities (AV1 > HEVC > AVC) while still maintaining the core efficiency metric\n\nThink of it this way: users don't actually care what codec is used - they care about getting high quality video at reasonable file sizes. Our metric measures this directly instead of using codec choice as an unreliable proxy. |\n| But doesn't this ignore quality? | The current encoding landscape places tremendous emphasis on maximizing absolute quality, often treating file size as a secondary concern. This metric aims to challenge that, or at least find a middle ground - we care about quality (which is why we use proper sources as our baseline and consider VMAF scores), but we acknowledge that most users only care about getting file sizes they actually want, not the marginal quality improvement gained by encoding from a remux rather than a web-dl. Rather than taking either extreme position - \"quality above all\" or \"smaller is always better\" - we focus on _efficiency_: getting the best practical quality for any given file size target. This approach **will not** satisfy quality enthusiasts, but it better serves the needs of most users. |\n| What if the source is not a 1080p remux? How do you tell? | This metric, like any data-driven system, will never achieve 100% accuracy. However, we can parse various indicators beyond just the release group or streaming service to identify non-remux sources. For example, we can identify when a non-DS4K WEB-DL or non-webrip from a reputable group is likely sourced from another lossy encode rather than a remux. We also maintain a manual tagging system to downrank certain release groups known for reencoding from non-high-quality sources. Groups like PSA and MeGusta will be ranked lower in the system, regardless of their efficiency scores, due to their known practices. |\n| How do you prefer HEVC? | We actually approach this from the opposite direction - instead of preferring HEVC, we downrank AVC. This is because HEVC naming conventions are inconsistent (groups use x265 and h265 interchangeably), making them difficult to parse reliably. In contrast, AVC is almost always labeled consistently as either x264 or h264, making it much easier to identify and downrank these releases. |\n| Why not consider releases above a 40% compression ratio? | For standard 1080p non-HDR content above a 40% compression ratio, x264 and x265 perform nearly identically in terms of VMAF scores, eliminating HEVC's key advantages. At this point, x264 becomes the preferred choice across all metrics - the encodes are easier to produce, far more common, and typically undergo more rigorous quality control. There's simply no compelling reason to use HEVC at these higher bitrates for standard 1080p content. |\n| What about animated content? | Animated content typically has different compression characteristics than live action - it often achieves excellent quality at much lower bitrates due to its unique properties (flat colors, sharp edges, less grain). Ideally, we would use higher target ratios for live action and lower ones for animation. However, reliably detecting animated content programmatically is extremely challenging. While we can sometimes identify anime by certain keywords or release group patterns, western animation, partial animation, and CGI-heavy content create too many edge cases for reliable detection. For now, we treat all content with the same metric, acknowledging this as a known limitation of the system. Users seeking optimal results for animated content may want to target lower compression ratios than they would for live action material, perhaps via a duplicate profile at a different compression target. |\n| Why does transparency require 60% at 2160p compared to 40% at 1080p? | The higher ratio requirement for 2160p content stems from several technical factors that compound to demand more data for achieving transparency:\n\n1. **Increased Color Depth**: Most 2160p content uses 10-bit color depth compared to 8-bit for standard 1080p content. This 25% increase in bit depth requires more data to maintain precision in color gradients and prevent banding.\n2. **HDR Requirements**: 2160p content often includes HDR metadata, which demands more precise encoding of brightness levels and color information. The expanded dynamic range means we need to preserve more subtle variations in both very bright and very dark scenes.\n3. **Resolution Scaling**: While 2160p has 4x the pixels of 1080p, compression efficiency doesn't scale linearly. Higher resolution reveals more subtle details and film grain, which require more data to preserve accurately.\n\nThese factors combine multiplicatively rather than additively, which is why we need a 50% increase in the compression ratio ceiling (from 40% to 60%) to achieve similar perceptual transparency. |\n| Do all 2160p releases need 60% for transparency? | No, the actual requirements vary significantly based on several factors:\n\n1. **Content Type**:\n\n   - Animation might achieve transparency at 30-40%\n   - Digital source material (like CGI-heavy films) often requires less\n   - Film-based content with heavy grain needs the full 60%\n\n2. **HDR Implementation**:\n\n   - SDR 2160p content can often achieve transparency at lower ratios\n   - Dolby Vision adds additional overhead compared to HDR10\n   - Some HDR grades are more demanding than others\n\n3. **Source Quality**:\n\n   - Digital intermediate resolution (2K vs 4K)\n   - Film scan quality and grain structure\n   - Original master's bit depth and color space\n\n4. **Scene Complexity**:\n\n   - High motion scenes need more data\n   - Complex textures and patterns require higher bitrates\n   - Dark scenes with subtle gradients are particularly demanding |\n\n[^1]: Shen, Y. (2020). \"Bjontegaard Delta Rate Metric\". Medium Innovation Labs Blog. https://medium.com/innovation-labs-blog/bjontegaard-delta-rate-metric-c8c82c1bc42c\n[^2]: Ling, N.; Antier, M.; Liu, Y.; Yang, X.; Li, Z. (2024). \"Video Quality Assessment: From FR to NR\". Electronics, 13(5), 953. https://www.mdpi.com/2079-9292/13/5/953",
- "last_modified": "2025-01-26T18:35:26.213813+00:00",
+ "last_modified": "2025-01-27T20:59:29.917199+00:00",
"title": "Encode Efficiency Index",
"slug": "EEi",
"author": "santiagosayshey",
@@ -17,7 +17,7 @@
{
"_id": "GPPi",
"content": "## What are Golden Popcorns?\n\n**_Golden Popcorns_** are _very high quality encodes_, marked as such by one of the best private torrent trackers. These releases are manually reviewed by a dedicated, experienced team of _Golden Popcorn_ checkers. Golden Popcorns are the simplest way to quantify a subjective _best_ encode.\n\n## The Decision Engine\n\nThe Golden Popcorn Performance Index, or GPPI, is a calculated metric, pivotal to the [Transparent](../Profiles/1080p%20Transparent.md) profile's decision-making process. It's engineered to rank release groups based on their propensity to release a Golden Popcorn encode at any given resolution $r$.\n\n## Formula\n\nOn first glance, it seems the most obvious way to determine which release groups are most likely to release golden popcorns is to find their Golden Popcorn Ratio, i.e. The number of Golden Popcorns divided by the total number of encodes for any given resolution _r_.\n\nHowever, If we were to take Golden Popcorn ratio at face value, we might incorrectly prioritise a release group who has a high GP ratio, but a low number of encodes. On the opposite spectrum, if we take the raw number of Golden Popcorns for any group, we might incorrectly prioritise a group with a low GP ratio.\n\nSo instead, we multiply the number of Golden Popcorns at resolution $r$ for a given release group, by a factor of said release group's Golden Popcorn Ratio. This essentially limits both metrics as a factor of each other.\n\nFor any given resolution _r_, the GPPI is defined as:\n\n$$\n\\begin{aligned}\n\\text{GPPI}_r &= GPE_r \\cdot \\left( \\frac{GPE_r}{E_r} \\right) \\\\\n &= \\frac{GPE_r^2}{E_r}\n\\end{aligned}\n$$\n\nWhere:\n\n- $\\text{GPPI}_r$ is the Golden Popcorn Performance Index at resolution $r$\n- $GPE_r$ is the number of Golden Popcorns at resolution $r$\n- $E_r$ is the total number of encodes at resolution $r$",
- "last_modified": "2025-01-26T18:35:26.213813+00:00",
+ "last_modified": "2025-01-27T20:59:29.917199+00:00",
"title": "Golden Popcorn Performance Index",
"slug": "GPPi",
"author": "santiagosayshey",
@@ -32,7 +32,7 @@
{
"_id": "RGP",
"content": "## So, how does Dictionarry _actually simplify media automation?_\n\nWell, first we need to understand that we're trying to **automate the subjective analysis of how \"good\" a release is**. To do that, we need to first define **what \"good\" even means**. To some people, it could mean how well something looks on their screen, or sounds through speakers; we define this as _quality_. To others, it means how many releases they can download while still maintaining some kind of quality standard; we define this as _efficiency_.\n\nSo, that leads us to a new question - _how do we measure quality and efficiency_? You might think we'd want to parse releases and find their technical properties; resolution, bitrate, video / audio codecs, hdr, etc.\n\n```\nRelease 1 (25.2 GiB): Blockbuster Movie A 2022 Hybrid 1080p WEBRip DDPA5.1 x264-group A\n\nRelease 2 (27.3 GiB): Blockbuster Movie A.1080p.WEBRip.DD+7.1.x264-group B\n```\n\nLooking at these two releases, you'll notice that they both have the EXACT same technical specification and would rank equally. But they're different sizes... so which is better? Using audio / video properties to measure quality / efficiency can be effective, but is largely **limited by the information that they convey**. You can't adequately answer which is better just by looking at these releases in isolation. So how do we not look at these releases in isolation? Or rather, how do we _extrapolate information that isn't already there?_\n\n### Group Tags\n\nOur answer lies in the little bit of information at the end of every release - it's **group tag**. Dictionarry tracks historic release group data in order to **rank groups based on their propensity to reach quantifiable levels of quality and efficiency**. We do this using two metrics:\n\n1. Golden Popcorn Performance Index (GPPi): How many golden popcorns a release group has, as a ratio of their total number of releases\n2. Encode Efficiency Index (EEi): The average size of a release group's encode compared to it's likely source.\n\nThese metrics are **evidence based, data driven and objective**.\n\n### TL;DR\n\nTL;DR: Dictionarry **simplifies media automation by prioritizing release groups that achieve quantifiable levels of quality and efficiency through objective measurement**. These release group rankings are built and maintained as custom formats to be scored in their respective quality profiles. You can review these group rankings below.",
- "last_modified": "2025-01-26T18:35:26.213813+00:00",
+ "last_modified": "2025-01-27T20:59:29.917199+00:00",
"title": "Release Group Philosophy",
"slug": "RGP",
"author": "santiagosayshey",
@@ -48,7 +48,7 @@
{
"_id": "home",
"content": "# \ud83d\udc4b Hey!\n\nWelcome to Dictionarry! This project aims to wiki-fy and **simplify media automation** in Radarr / Sonarr through extensive, data driven documentation, custom formats and quality profiles.\n\n---\n\n## \ud83d\udca1 Motivation\n\nNavigating the world of media automation and coming across quality terms like \"Remux\", or \"HEVC\" or \"Dolby Vision\" can be quite daunting when all you want to do is setup a media server to watch some content. If often **feels like you need a masters in audio / video just to grab the latest blockbuster.** Dictionarry aims not to explain these concepts in detail, but **abstract them into more approachable ideas** that don't require extensive knowledge or experience.\n\nDictionarry leverages two key features of Radarr and Sonarr to simplify media automation:\n\n1. Custom Formats - Think of these as smart filters that scan release titles for specific patterns. They help **identify important characteristics** of your media, such as:\n\n - Video quality (4K, HDR, Dolby Vision)\n - Audio formats (Atmos, DTS, TrueHD)\n - Source types (Remux, Web-DL, Blu-ray)\n - Potential issues (upscaled content, poor encodes)\n\n2. Quality Profiles - These act like a scoring system that **ranks releases** based on their Custom Format matches. You can:\n - Prioritize what matters most to you\n - Automatically upgrade to better versions\n - Avoid problematic releases\n\nThink of Dictionarry as your personal car-buying expert: Instead of researching every technical specification and test-driving dozens of vehicles, you get access to a curated showroom of pre-vetted options that match what you're looking for. Whether you want:\n\n- 2160p Remux - **Maximum Quality** 4K HDR remuxes with lossless audio and Dolby Vision\n- 2160p Quality - **Transparent 4K** HDR encodes selected using the Encode Efficiency Index\n- 1080p Quality - **Transparent 1080p** encodes optimized using the Golden Popcorn Performance Index\n- 1080p Efficient - **Efficient x265 1080p** Encodes optimized to save space using the Encode Efficiency Index\n- And More....\n\n\n\nDictionarry's database of tested profiles and formats handles the technical decisions for you.\n\n---\n\n## \u2699\ufe0f Profilarr\n\nThe database by itself does nothing. Custom Formats and Quality Profiles **need to be imported** and configured in your individual arr installations. Rather than leaving you to manually create everything yourself based on our guides, we've created **Profilarr** to automate this process.\n\nProfilarr is a **configuration management tool** for Radarr and Sonarr that can interface with **ANY remote configuration database** (not just Dictionarry's!). It automatically:\n\n- **Pulls** new updates from your chosen database\n- **Compiles** the database format into specific arr formats\n- **Imports** them to your arr installations\n- Manages version control of your configurations\n\nBuilt on top of git, Profilarr treats your configurations like code, allowing you to:\n\n- Track changes over time\n- Maintain your own customizations while still receiving database updates\n- Resolve conflicts between local / remote changes when they arise\n\nThe architecture was specifically built like this to **put user choice first**. 
We believe that:\n\n- **Your media setup should reflect your needs, not our opinions**\n- Updates should enhance your configuration, not override it\n- Different users have different requirements (storage constraints, hardware capabilities, quality preferences)\n- The ability to customize should never be sacrificed for convenience\n\nProfilarr empowers you to use Dictionarry's database (or anyone elses!) as a foundation while maintaining the freedom to adapt it to your specific needs.",
- "last_modified": "2025-01-26T18:35:26.213813+00:00",
+ "last_modified": "2025-01-27T20:59:29.917199+00:00",
"title": "home",
"slug": "home",
"author": "santiagosayshey",