mirror of
https://github.com/Dictionarry-Hub/database.git
synced 2025-12-10 15:57:00 +00:00
[
{
"_id": "EEi",
"content": "This metric is aimed at identifying and ranking release groups based on their propensity to release **encodes that meet certain compression ratios**, with particular focus on **HEVC** releases where optimal efficiency occurs in specific bitrate ranges. By ranking these groups, we effectively prioritize releases that maximize HEVC's compression capabilities while maintaining quality at minimal file sizes.\n\n## What is a Compression Ratio?\n\nA compression ratio is a (made up) metric that evaluates encodes against their sources. We express this as the **encoded file size as a percentage of its source size** (typically a **remux** or **WEB-DL**).\n\nFor example:\n\n| Movie | Source (Remux) | Encode | Compression Ratio |\n| ------- | -------------- | ------ | ----------------- |\n| Movie A | 40 GB | 10 GB | 25% |\n| Movie B | 30 GB | 6 GB | 20% |\n| Movie C | 50 GB | 15 GB | 30% |\n\n## Why Is This Important?\n\nUnderstanding compression ratios helps balance two competing needs: **maintaining high video quality while minimizing file size**. Modern codecs like **HEVC** have a **\"sweet spot\"** where they deliver excellent quality with significant size savings. Finding this optimal point is crucial because:\n\n- Storage and bandwidth are always **limited resources**\n- Going beyond certain bitrates provides **diminishing quality returns**\n- Different codecs have different **efficiency curves**\n- Release groups need clear standards for **quality vs. size trade-offs**\n\n## What Ratio is Best?\n\nThere's no one-size-fits-all answer when it comes to choosing the perfect compression ratio. The \"best\" ratio **depends entirely on your specific needs**. At 1080p:\n\n- Space-conscious users might prefer **smaller files (5-10% of source)** with quality trade-offs\n- Quality-focused users might push towards **higher quality (30-40% of source)** for transparency\n- Most users find a sweet spot in the middle\n\nHowever, there are technical limits - files larger than **40% for 1080p** and **60% for 2160p** provide no meaningful benefits.\n\n## Why Set Maximum Ratios of 40% and 60%?\n\nThe compression ratio ceilings are set based on different factors for 1080p and 2160p content:\n\n### 1080p (40% Maximum)\n\nThe 40% ceiling for 1080p exists because we can roughly measure where **HEVC stops being efficient compared to AVC**. 
We do this using two key video quality metrics:\n\n- **VMAF** - analyzes how humans perceive video quality and scores it from 0-100\n- **BD-Rate** - tells us how much smaller one encode is compared to another while maintaining the same quality level\n\nUsing these tools together shows us that:\n\n- HEVC achieves **20-40% smaller files** in the mid-bitrate range (~2-10 Mbps for 1080p)\n- These space savings are consistent across different quality levels\n- Beyond this point, both codecs achieve **near identical quality**\n- At ratios above 40%, **AVC becomes preferred** due to better tooling and quality control\n\n### 2160p (60% Maximum)\n\nThe 60% ceiling for 2160p content is based on different considerations:\n\n- This is approximately where **visual transparency** becomes achievable\n- Higher ratios provide **diminishing returns**\n- At this compression level, content achieves **VMAF scores above 95**\n- **Storage efficiency** becomes critical due to larger base file sizes\n- Quality improvements become **increasingly subtle** beyond this point\n\nRead these articles to better understand how VMAF and BD-Rate tell us how efficient a codec is[^1][^2]:\n\n## How Do We Apply This Index?\n\nThe ranking system works by calculating how close each Release Group / Streaming Service comes to achieving a user's desired compression ratio. This is done through a few key steps:\n\n1. **Delta Calculation**: We calculate the absolute difference (delta) between a group's average compression ratio and the target ratio. For example, if a group averages 25% compression and our target is 20%, their delta would be |25 - 20| = 5 percentage points.\n\n2. **K-means Clustering**: We use k-means clustering to automatically group release groups into tiers based on their deltas. 
K-means works by:\n - Starting with k random cluster centers\n - Assigning each group to its nearest center\n - Recalculating centers based on group assignments\n - Repeating until stable\n\n# Example Rankings\n\n## 1080p Examples\n\n### Example 1: Users prioritizing storage efficiency (10% target)\n\nUsers might choose this very aggressive compression target when:\n\n- Managing large libraries on limited storage\n- Collecting complete series where total size is a major concern\n- Primarily viewing on mobile devices or smaller screens\n- Dealing with bandwidth caps or slow internet connections\n\n| Tier | Group | Efficiency | Delta |\n| ---- | ----------------------- | ---------- | ----- |\n| 1 | iVy | 9.37% | 0.63 |\n| 1 | PSA | 7.89% | 2.11 |\n| 2 | Vyndros | 16.08% | 6.08 |\n| 2 | Chivaman | 16.80% | 6.80 |\n| 2 | Amazon Prime (H.265) | 16.15% | 6.15 |\n| 3 | Disney+ (H.265) | 20.32% | 10.32 |\n| 3 | TAoE | 22.78% | 12.78 |\n| 3 | QxR | 23.25% | 13.25 |\n| 3 | BRiAN | 25.16% | 15.16 |\n| 3 | Movies Anywhere (H.265) | 26.05% | 16.05 |\n| 4 | MainFrame | 37.63% | 27.63 |\n| 4 | NAN0 | 37.71% | 27.71 |\n\n### Example 2: Users seeking balanced quality and size (25% target)\n\nThis moderate compression target appeals to users who:\n\n- Have reasonable storage capacity but still want efficiency\n- Watch on mid to large screens where quality becomes more noticeable\n- Want a good balance between visual quality and practical file sizes\n\n| Tier | Group | Efficiency | Delta |\n| ---- | ----------------------- | ---------- | ----- |\n| 1 | BRiAN | 25.16% | 0.16 |\n| 1 | Movies Anywhere (H.265) | 26.05% | 1.05 |\n| 1 | QxR | 23.25% | 1.75 |\n| 1 | TAoE | 22.78% | 2.22 |\n| 2 | Disney+ (H.265) | 20.32% | 4.68 |\n| 3 | Amazon Prime (H.265) | 16.15% | 8.85 |\n| 3 | Chivaman | 16.80% | 8.20 |\n| 3 | Vyndros | 16.08% | 8.92 |\n| 3 | MainFrame | 37.63% | 12.63 |\n| 3 | NAN0 | 37.71% | 12.71 |\n| 4 | iVy | 9.37% | 15.63 |\n| 4 | PSA | 7.89% | 17.11 |\n\n## 2160p Examples\n\n### Example 3: Extreme Space Saving (20% target)\n\nThis aggressive 2160p compression appeals to users who:\n\n- Want to maintain a 4K library on limited storage\n- Primarily view content at typical viewing distances where subtle quality differences are less noticeable\n- Need to conserve bandwidth while still enjoying 4K resolution\n- Have a large collection of 4K content and need to balance quality with practical storage constraints\n\nTODO: EXAMPLES\n\n### Example 4: Balanced 4K (40% target)\n\nThis middle-ground approach is ideal for users who:\n\n- Have decent storage capacity but still want reasonable efficiency\n- Watch on larger screens where quality differences become more apparent\n- Want to maintain high quality while still keeping files manageable\n- Need reliable HDR performance without excessive file sizes\n\nTODO: EXAMPLES\n\n### Example 5: Near Transparent Quality (60% target)\n\nThis higher bitrate target is chosen by users who:\n\n- Have ample storage and prioritize maximum quality consciously\n- Watch on high-end displays where subtle quality differences are noticeable\n- Want to maintain archive-quality collections\n- Focus on difficult-to-encode content where compression artifacts are more visible\n\nTODO: EXAMPLES\n\nThese examples demonstrate how different groups excel at different target ratios, and how streaming services tend to maintain consistent compression approaches regardless of user preferences. 
The rankings help users quickly identify which releases will best match their specific quality and size requirements.\n\n## Frequently Asked Questions\n\n| Question | Answer |\n| -------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Why not just detect h265/x265 releases? Isn't that simpler? | This is a common misconception that \"HEVC = smaller = better\". While it's true that HEVC/x265 _can_ achieve better compression than AVC/x264, simply detecting the codec tells us nothing about the actual efficiency of the specific encode. A poorly encoded HEVC release can be larger and lower quality than a well-tuned x264 encode. By focusing on compression ratio instead of codec detection, we measure what actually matters - how efficiently the release uses storage space while maintaining quality. This approach has several advantages:<br><br>- It rewards efficient encodes regardless of codec choice<br>- It catches inefficient HEVC encodes that waste space<br>- It avoids the complexity of parsing inconsistent HEVC labeling (h265/x265)<br>- It future-proofs the system for newer codecs like AV1, where we can simply adjust our codec ranking priorities (AV1 > HEVC > AVC) while still maintaining the core efficiency metric<br><br>Think of it this way: users don't actually care what codec is used - they care about getting high quality video at reasonable file sizes. Our metric measures this directly instead of using codec choice as an unreliable proxy. |\n| But doesn't this ignore quality? | The current encoding landscape places tremendous emphasis on maximizing absolute quality, often treating file size as a secondary concern. This metric aims to challenge that, or at least find a middle ground - we care about quality (hence why we use proper sources as our baseline and consider VMAF scores), but we acknowledge that most users only care about getting file sizes they actually want, and not the marginal quality improvements you get from encoding from a remux, compared to a web-dl. Rather than taking either extreme position - \"quality above all\" or \"smaller is always better\" - we focus on _efficiency_: getting the best practical quality for any given file size target. This approach **will not** satisfy quality enthusiasts, but it better serves the needs of most users. |\n| What if the source is not a 1080p remux? How do you tell? 
| This metric, like any data-driven system, will never achieve 100% accuracy. However, we can parse various indicators beyond just the release group or streaming service to identify non-remux sources. For example, we can identify when a non-DS4K WEB-DL or non-webrip from a reputable group is likely sourced from another lossy encode rather than a remux. We also maintain a manual tagging system to downrank certain release groups known for reencoding from non-high-quality sources. Groups like PSA and MeGusta will be ranked lower in the system, regardless of their efficiency scores, due to their known practices. |\n| How do you prefer HEVC? | We actually approach this from the opposite direction - instead of preferring HEVC, we downrank AVC. This is because HEVC naming conventions are inconsistent (groups use x265 and h265 interchangeably), making them difficult to parse reliably. In contrast, AVC is almost always labeled consistently as either x264 or h264, making it much easier to identify and downrank these releases. |\n| Why not consider releases above 40% efficiency? | For standard 1080p non-HDR content, above 40% compression ratio, x264 and x265 perform nearly identically in terms of VMAF scores, eliminating HEVC's key advantages. At this point, x264 becomes the preferred choice across all metrics - the encodes are easier to produce, far more common, and typically undergo more rigorous quality control. There's simply no compelling reason to use HEVC at these higher bitrates for standard 1080p content. |\n| What about animated content? | Animated content typically has different compression characteristics than live action - it often achieves excellent quality at much lower bitrates due to its unique properties (flat colors, sharp edges, less grain). Ideally, we would use higher target ratios for live action and lower ones for animation. However, reliably detecting animated content programmatically is extremely challenging. While we can sometimes identify anime by certain keywords or release group patterns, western animation, partial animation, and CGI-heavy content create too many edge cases for reliable detection. For now, we treat all content with the same metric, acknowledging this as a known limitation of the system. Users seeking optimal results for animated content may want to target lower compression ratios than they would for live action material, perhaps via a duplicate profile at a different compression target. |\n| Why does transparency require 60% at 2160p compared to 40% at 1080p? | The higher ratio requirement for 2160p content stems from several technical factors that compound to demand more data for achieving transparency:<br><br>1. **Increased Color Depth**: Most 2160p content uses 10-bit color depth compared to 8-bit for standard 1080p content. This 25% increase in bit depth requires more data to maintain precision in color gradients and prevent banding.<br><br>2. **HDR Requirements**: 2160p content often includes HDR metadata, which demands more precise encoding of brightness levels and color information. The expanded dynamic range means we need to preserve more subtle variations in both very bright and very dark scenes.<br><br>3. **Resolution Scaling**: While 2160p has 4x the pixels of 1080p, compression efficiency doesn't scale linearly. 
Higher resolution reveals more subtle details and film grain, which require more data to preserve accurately.<br><br>These factors combine multiplicatively rather than additively, which is why we need a 50% increase in the compression ratio ceiling (from 40% to 60%) to achieve similar perceptual transparency. |\n| Do all 2160p releases need 60% for transparency? | No, the actual requirements vary significantly based on several factors:<br><br>1. **Content Type**:<br>- Animation might achieve transparency at 30-40%<br>- Digital source material (like CGI-heavy films) often requires less<br>- Film-based content with heavy grain needs the full 60%<br><br>2. **HDR Implementation**:<br>- SDR 2160p content can often achieve transparency at lower ratios<br>- Dolby Vision adds additional overhead compared to HDR10<br>- Some HDR grades are more demanding than others<br><br>3. **Source Quality**:<br>- Digital intermediate resolution (2K vs 4K)<br>- Film scan quality and grain structure<br>- Original master's bit depth and color space<br><br>4. **Scene Complexity**:<br>- High motion scenes need more data<br>- Complex textures and patterns require higher bitrates<br>- Dark scenes with subtle gradients are particularly demanding |\n\n[^1]: Shen, Y. (2020). \"Bjontegaard Delta Rate Metric\". Medium Innovation Labs Blog. https://medium.com/innovation-labs-blog/bjontegaard-delta-rate-metric-c8c82c1bc42c\n[^2]: Ling, N.; Antier, M.; Liu, Y.; Yang, X.; Li, Z. (2024). \"Video Quality Assessment: From FR to NR\". Electronics, 13(5), 953. https://www.mdpi.com/2079-9292/13/5/953",
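To make the delta calculation and k-means tiering described under "How Do We Apply This Index?" concrete, here is a minimal sketch. It is not Profilarr's actual implementation: it assumes scikit-learn is available, uses the example 1080p averages from the tables above, and treats the 10% target and the four-tier count as illustrative choices.

```python
# Illustrative sketch: rank release groups by how close their average
# compression ratio sits to a target, then tier them with k-means on the deltas.
# Group averages are the example figures quoted in the tables above.
from sklearn.cluster import KMeans
import numpy as np

group_averages = {
    "iVy": 9.37, "PSA": 7.89, "Vyndros": 16.08, "Chivaman": 16.80,
    "Amazon Prime (H.265)": 16.15, "Disney+ (H.265)": 20.32,
    "TAoE": 22.78, "QxR": 23.25, "BRiAN": 25.16,
    "Movies Anywhere (H.265)": 26.05, "MainFrame": 37.63, "NAN0": 37.71,
}

def rank_groups(averages, target, tiers=4):
    # Delta = absolute distance from the user's target compression ratio.
    names = list(averages)
    deltas = np.array([[abs(averages[n] - target)] for n in names])
    # Cluster the deltas, then order clusters so tier 1 = closest to the target.
    labels = KMeans(n_clusters=tiers, n_init=10, random_state=0).fit_predict(deltas)
    order = np.argsort([deltas[labels == c].mean() for c in range(tiers)])
    tier_of = {cluster: rank + 1 for rank, cluster in enumerate(order)}
    ranked = sorted(zip(names, deltas[:, 0], labels), key=lambda x: x[1])
    return [(name, delta, tier_of[label]) for name, delta, label in ranked]

for name, delta, tier in rank_groups(group_averages, target=10):
    print(f"Tier {tier}  {name:<25} delta={delta:.2f}")
```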
"last_modified": "2025-03-18T21:23:24.503089+00:00",
"title": "Encode Efficiency Index",
"slug": "EEi",
"author": "santiagosayshey",
"created": "2024-12-28",
"tags": [
"wiki",
"efficiency",
"encode"
],
"blurb": "A data-driven metric that measures how well release groups balance file size and quality in their encodes, helping users find releases that match their storage and quality preferences."
},
{
"_id": "FAQ",
"content": "This entry is dedicated to providing answers to the most frequently asked questions about Dictionarry / Profilarr.\n\n| Question | Answer |\n| ------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Why isn't the highest scored release being grabbed? | You may have prefer propers and repacks on. This option forces releases with a proper / repack flag to be grabbed, even if it's Custom Format score is not the highest. To turn it off, navigate to Settings > Media Management > File Management and set Prefer Propers / Repacks to Do Not Prefer. |\n| What's the difference between h264, x264, AVC, h265, x265 and HEVC? | **H.264 (AVC)**: A video compression standard.<br>**x264**: An open source encoder that produces H.264 videos.<br>**H.265 (HEVC)**: A more advanced video compression standard than H.264, offering better compression and quality for 4K and higher resolutions.<br>**x265**: An open source encoder that produces H.265 videos.<br><br>**Key Points**:<br>- HEVC/AVC refers to the codec in general<br>- H.264/5 refers to a lossless rip (WEB-DL or remux)<br>- x264/5 refers to encoded content (WEBRip or Blu-ray encode)<br><br>_Note: Many HEVC files are mislabeled, making it challenging to distinguish between lossless and lossy releases based on release names alone._ |\n| What quality settings should I use? | It's suggested that you should set everything to min / max since Profilarr uses custom formats to do the major selections. However you might run into the occasional sample download if you use lots of usenet indexers. If you do find that these are being grabbed, then you can set the minimum to be 1-2gb per hour for whatever quality you need it in. |\n| What does \"Transparency\" mean? | Audiovisual transparency refers to the degree to which an encoded audio or video signal is indistinguishable from the original source signal. 
The term \"transparency\" stems from the idea that the encoding and decoding processes are imperceptible, as if the system were _transparent_.<br><br>- An audio codec with high transparency will produce an encoded signal that, when decoded, is identical to the original audio source, without any discernible differences in frequency response, dynamic range, or noise floor.<br><br>- A video codec exhibiting transparency will generate an encoded signal that, upon decoding, results in a picture that is visually indistinguishable from the source video in terms of resolution, color space, and pixel-level detail.<br><br>Objective metrics, such as [VMAF (Video Multi-Method Assessment Fusion)](https://en.wikipedia.org/wiki/Video_Multimethod_Assessment_Fusion), are sometimes used to measure transparency by comparing the encoded signal to the original source and calculating a numerical score that quantifies the perceptual similarity between the two, with higher scores indicating greater transparency. |",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "FAQ",
"slug": "faq",
"author": "santiagosayshey",
"created": "2025-02-02",
"tags": [
"wiki",
"faq"
],
"blurb": "Frequently asked questions pertaining to Dictionarry / Profillar and all of its tooling."
},
{
"_id": "GPPi",
"content": "## What are Golden Popcorns?\n\n**_Golden Popcorns_** are _very high quality encodes_, marked as such by one of the best private torrent trackers. These releases are manually reviewed by a dedicated, experienced team of _Golden Popcorn_ checkers. Golden Popcorns are the simplest way to quantify a subjective _best_ encode.\n\n## The Decision Engine\n\nThe Golden Popcorn Performance Index, or GPPI, is a calculated metric, pivotal to the [Transparent](../Profiles/1080p%20Transparent.md) profile's decision-making process. It's engineered to rank release groups based on their propensity to release a Golden Popcorn encode at any given resolution $r$.\n\n## Formula\n\nOn first glance, it seems the most obvious way to determine which release groups are most likely to release golden popcorns is to find their Golden Popcorn Ratio, i.e. The number of Golden Popcorns divided by the total number of encodes for any given resolution _r_.\n\nHowever, If we were to take Golden Popcorn ratio at face value, we might incorrectly prioritise a release group who has a high GP ratio, but a low number of encodes. On the opposite spectrum, if we take the raw number of Golden Popcorns for any group, we might incorrectly prioritise a group with a low GP ratio.\n\nSo instead, we multiply the number of Golden Popcorns at resolution $r$ for a given release group, by a factor of said release group's Golden Popcorn Ratio. This essentially limits both metrics as a factor of each other.\n\nFor any given resolution _r_, the GPPI is defined as:\n\n$$\n\\begin{aligned}\n\\text{GPPI}_r &= GPE_r \\cdot \\left( \\frac{GPE_r}{E_r} \\right) \\\\\n &= \\frac{GPE_r^2}{E_r}\n\\end{aligned}\n$$\n\nWhere:\n\n- $\\text{GPPI}_r$ is the Golden Popcorn Performance Index at resolution $r$\n- $GPE_r$ is the number of Golden Popcorns at resolution $r$\n- $E_r$ is the total number of encodes at resolution $r$",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "Golden Popcorn Performance Index",
"slug": "GPPi",
"author": "santiagosayshey",
"created": "2023-04-20",
"tags": [
"wiki",
"quality",
"encode"
],
"blurb": "A data-driven metric that identifies high-quality release groups by analyzing their Golden Popcorn track record."
},
{
"_id": "RGP",
"content": "## So, how does Dictionarry _actually simplify media automation?_\n\nWell, first we need to understand that we're trying to **automate the subjective analysis of how \"good\" a release is**. To do that, we need to first define **what \"good\" even means**. To some people, it could mean how well something looks on their screen, or sounds through speakers; we define this as _quality_. To others, it means how many releases they can download while still maintaining some kind of quality standard; we define this as _efficiency_.\n\nSo, that leads us to a new question - _how do we measure quality and efficiency_? You might think we'd want to parse releases and find their technical properties; resolution, bitrate, video / audio codecs, hdr, etc.\n\n```\nRelease 1 (25.2 GiB): Blockbuster Movie A 2022 Hybrid 1080p WEBRip DDPA5.1 x264-group A\n\nRelease 2 (27.3 GiB): Blockbuster Movie A.1080p.WEBRip.DD+7.1.x264-group B\n```\n\nLooking at these two releases, you'll notice that they both have the EXACT same technical specification and would rank equally. But they're different sizes... so which is better? Using audio / video properties to measure quality / efficiency can be effective, but is largely **limited by the information that they convey**. You can't adequately answer which is better just by looking at these releases in isolation. So how do we not look at these releases in isolation? Or rather, how do we _extrapolate information that isn't already there?_\n\n### Group Tags\n\nOur answer lies in the little bit of information at the end of every release - it's **group tag**. Dictionarry tracks historic release group data in order to **rank groups based on their propensity to reach quantifiable levels of quality and efficiency**. We do this using two metrics:\n\n1. Golden Popcorn Performance Index (GPPi): How many golden popcorns a release group has, as a ratio of their total number of releases\n2. Encode Efficiency Index (EEi): The average size of a release group's encode compared to it's likely source.\n\nThese metrics are **evidence based, data driven and objective**.\n\n### TL;DR\n\nTL;DR: Dictionarry **simplifies media automation by prioritizing release groups that achieve quantifiable levels of quality and efficiency through objective measurement**. These release group rankings are built and maintained as custom formats to be scored in their respective quality profiles. You can review these group rankings below.",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "Release Group Philosophy",
"slug": "RGP",
"author": "santiagosayshey",
"created": "2025-01-26",
"tags": [
"home",
"wiki",
"release_group",
"philosophy"
],
"blurb": "Explore Dictionarry's release group abstraction philosophy and what it actually means to simplify media automation."
},
{
"_id": "development",
"content": "Profilarr functions as both a synchronization tool for end users and a complete development platform for developers. While most users will simply connect to existing databases to receive updates, Profilarr's development capabilities allow for creating, testing, and contributing custom media configurations back to the community through its Git integration.\n\n## Setting Up Your Database Repository\n\nTo use Profilarr's development features, you'll need a GitHub repository for your database. You have two options:\n\n### Option 1: Fork a PSF Database\n\n1. Go to https://github.com/Dictionarry-Hub/database (or any other Profilarr Standard Format Database)\n2. Click the \"Fork\" button in the top-right corner\n3. Follow the prompts to complete the fork process\n4. Your forked repository will now be ready to use with Profilarr\n\n### Option 2: Create a New Database Repository\n\n1. Click the \"+\" in the top-right corner and select \"New repository\"\n2. Give your repository a name (like \"profilarr-database\")\n3. Set visibility to public or private as needed (it needs to be public if you intend to share it)\n4. Click \"Create repository\"\n5. Clone the repository to your local machine\n6. Create three folders: `custom_formats`, `regex_patterns`, and `profiles`\n7. Add a `.gitkeep` file in each folder (this empty file is necessary to ensure Git tracks these folders; otherwise, they won\u2019t be included in the repository, which may cause errors in Profilarr)\n8. Commit and push these changes to your repository\n\n## Development Configuration\n\n### Generate a GitHub Personal Access Token (PAT)\n\nTo allow Profilarr to connect and push to your remote database, you'll need to generate a GitHub Personal Access Token (PAT). This token gives Profilarr permission to access and update your GitHub repository.\n\n1. Sign in to your GitHub account\n2. Go to Settings > Developer settings > Personal access tokens\n3. Click \"Generate new token\"\n4. Choose **Fine-grained**\n5. Give your token a descriptive name (e.g., \"Profilarr Development\")\n6. Apply the following permissions:\n - **Repository access:** Select your database repository\n - **Permissions:** Set `contents` and `metadata` to **Read & Write**\n7. Click \"Generate token\"\n8. Copy your new token (make sure to save it somewhere safe, as you won\u2019t be able to see it again)\n\n### Configure Your User Information\n\nYou'll also need to provide a username and email for Git. These will be associated with any commits you make to the database:\n\n- **Username**: This will appear in commit logs and will be visible to other contributors\n- **Email**: This will be used for Git commits and may be visible in public repositories\n\n### Create an Environment File\n\nCreate a `.env` file with the following information. This is required for database contributions:\n\n```\nGIT_USER_NAME=your_username\nGIT_USER_EMAIL=your_email\nPROFILARR_PAT=your_github_pat\n```\n\n\u26a0 **Security Note:** Avoid committing `.env` files containing secrets to public repositories. If working on a shared system, store credentials in a separate `.env.local` file or configure them directly in Docker. 
To ensure these files are ignored by Git, add the following entry to your `.gitignore` file:\n\n```\n.env\n.env.local\n```\n\n## Setup\n\nWith your credentials configured, you can now deploy Profilarr for development.\n\n### Docker Compose (recommended)\n\n```yaml\nservices:\n profilarr:\n image: santiagosayshey/profilarr:latest # or :beta for pre-release versions\n container_name: profilarr\n ports:\n - 6868:6868\n volumes:\n - /path/to/your/data:/config\n environment:\n - TZ=UTC # Set your timezone\n env_file:\n - .env # Required for database contributions\n restart: unless-stopped\n```\n\n### Docker CLI\n\n```bash\ndocker run -d \\\n --name=profilarr \\\n -p 6868:6868 \\\n -v /path/to/your/data:/config \\\n -e TZ=UTC \\\n --env-file .env \\\n --restart unless-stopped \\\n santiagosayshey/profilarr:latest # or :beta for pre-release versions\n```\n\n### Unraid\n\nFor Unraid users, the Profilarr Community App includes placeholders for required environment variables. To enable development mode, you must replace these placeholders with your actual credentials:\n\n- `GIT_USER_NAME`\n- `GIT_USER_EMAIL`\n- `PROFILARR_PAT`\n\n## Verification\n\nTo confirm that everything is set up correctly, check the startup logs for Git user initialization. The logs should include entries similar to the following:\n\n```\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Initializing Git user\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Configuring Git user\nprofilarr | 2025-03-18 20:08:35 - app.init - DEBUG - Retrieved Git config: Name - santiagosayshey, Email - user@example.com\nprofilarr | 2025-03-18 20:08:35 - app.db.queries.settings - DEBUG - PAT status verified\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Git user configuration completed\nprofilarr | 2025-03-18 20:08:35 - app.init - INFO - Git user initialized successfully\n```\n\n## Troubleshooting\n\nIf you encounter issues with your development setup:\n\n| Issue | Possible Solution |\n| -------------------------------------------- | ----------------------------------------------------------------------------------- |\n| **GitHub token not working** | Verify your PAT has `contents` and `metadata` read/write permissions |\n| **Profilarr fails to access the repository** | Ensure your repository is public (or your token has access to private repositories) |\n| **Git username/email not recognized** | Run `git config --global user.name` and `git config --global user.email` to verify |\n| **Cannot push to repository** | Ensure your container has network access to GitHub (try `ping github.com`) |\n| **Updated `.env` not applied** | Remove and recreate the container to reload environment variables |\n\nFor additional help or to contribute to Profilarr, join our community on [GitHub](https://github.com/santiagosayshey/profilarr) or [Discord](https://discord.gg/Y9TYP6jeYZ).\n\n## Contributing to Databases\n\n1. **Link Your Fork in Profilarr**\n\n - Open Profilarr and navigate to the database settings.\n - Enter the GitHub repository URL of your forked database.\n\n2. **Make Changes in Profilarr**\n\n - Use Profilarr's built-in tools to modify or add database entries.\n - Profilarr will handle formatting and validation automatically.\n\n3. 
**Commit and Push Changes**\n\n - Profilarr provides actions to **revert, stage, commit, and push** changes.\n - After making changes, stage them using the **Stage** button.\n - Once staged, commit the changes with a commit message.\n - Finally, use the **Push** button to send your changes to your GitHub fork.\n - Roll back any unwanted changes using the **Revert** button.\n\n4. **Create a Pull Request (PR)**\n - Go to your fork on GitHub and navigate to the \"Pull Requests\" tab.\n - Click \"New pull request\" and select your fork and branch.\n - Provide a clear description of the changes and submit the PR.\n - Wait for review and approval before merging.\n\n### \u26a0 Editing Databases Directly\n\nWhile it's possible to edit database files manually in an IDE or on GitHub, this is not recommended unless you fully understand Profilarr\u2019s formatting and validation rules. Profilarr enforces constraints to ensure data integrity, and bypassing these safeguards can lead to:\n\n- Corrupted or invalid files that Profilarr cannot process correctly.\n- Unexpected behavior when syncing with Profilarr.\n- Inconsistent formatting, leading to rejected updates.\n\nTo make modifications, it's strongly advised to use Profilarr\u2019s built-in editing tools whenever possible. If direct edits are necessary, always validate the changes in a local instance of Profilarr before pushing them to the repository.",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "Development Setup",
"slug": "development-setup",
"author": "santiagosayshey",
"created": "2025-03-19",
"tags": [
"home",
"wiki",
"setup",
"install",
"develop"
],
"blurb": "Comprehensive guide for setting up Profilarr for database development"
},
{
"_id": "edition",
"content": "By default, Dictionarry's profiles prefer the ['Special' Edition](https://dictionarry.dev/formats/special-edition) of each movie. This is because these editions are often considered the more 'definitive' version of the movie because they contain the director's complete creative vision without studio interference or runtime constraints, and are often recommended over their theatrical counterparts.\n\n| Movie | Preferred Version | Reasons |\n| ----------------------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Aliens (1986) | Special | James Cameron's Special Edition enhances the film with crucial character development, particularly the scenes about Ripley's daughter which add emotional depth to her relationship with Newt. While the theatrical cut has tighter pacing, the added content like the sentry gun sequences adds valuable world-building and tension. The colony scenes provide important context that enriches rather than spoils the story. |\n| Blade Runner (1982) | Final Cut | The Final Cut (2007) is considered the definitive version over theatrical, workprint, and Director's Cut releases. It removes the theatrical's controversial voice-over narration and \"happy ending\" that were studio-mandated and disliked by cast and crew. It preserves the original's ambiguous ending about Deckard's nature while fixing numerous continuity errors and technical issues. Key improvements include: cleaned up wire removal in spinner scenes, fixed lip sync in Zhora's death scene, digital correction of the obvious stunt double's face, properly matching the number of replicants mentioned to those shown, correction of the dove release scene's obvious day-for-night shooting, improved color timing that better matches Jordan Cronenweth's original cinematography, and restoration of the full unicorn dream sequence that better supports the film's central mysteries. While some defend elements of other versions (particularly the 1992 Director's Cut), the Final Cut represents Ridley Scott's complete creative vision with modern technical capabilities to properly realize it. |\n| The Lord of the Rings Trilogy (2001-2003) | Extended Editions | Each film's Extended Edition adds crucial character development, world-building and plot points that enrich the story: Fellowship adds the gift-giving scene and more Lothlorien. Two Towers expands Boromir/Faramir's backstory, adds Theodred's funeral for deeper Rohan culture. 
Return of the King adds the Witch King destroying Gandalf's staff, Saruman's fate, and House of Healing. The additional 30-50 minutes per film are so seamlessly integrated that many fans consider these the definitive versions. |\n| Batman v Superman: Dawn of Justice (2016) | Ultimate Edition | The 3-hour cut restores crucial plot threads that explain character motivations and fill plot holes. Added scenes show Superman actually helping people, Lex's manipulation of both heroes, and clearer reasons for the African incident blamed on Superman. The extended cut makes the story more coherent while better developing both protagonists' perspectives. |\n| The Abyss (1989) | Special Edition | The extended version restores a crucial tidal wave sequence that better explains the aliens' motivations and adds a stronger environmental message to the ending. Additional scenes provide more context for the NTIs (non-terrestrial intelligence) and their purpose, while expanding character relationships. Most notably, the restored ending gives the film a more impactful and complete conclusion that Cameron originally intended. |\n| Midsommar (2019) | Director's Cut | The 171-minute version adds key scenes that provide deeper insight into the relationship dynamics, particularly Christian's gaslighting of Dani. Additional folk-horror rituals and customs make the H\u00e5rga community feel more developed and their practices more grounded. The added character moments make the emotional climax more impactful. |\n| I Am Legend (2007) | Alternate Version | This version's different ending completely changes the meaning of the title and stays truer to Richard Matheson's novel. Instead of Smith's character killing himself to stop the creatures, he realizes they are actually intelligent beings protecting their own, making him the monster of their legends - their \"legend.\" This ending better serves the film's themes about humanity and perspective. |\n| Watchmen (2009) | Director's Cut | The 186-minute version adds essential character depth and crucial plot elements from the graphic novel, including more of Hollis Mason and his death scene. The extended cut better develops the complexity of the alternate 1985 setting and the moral ambiguity of its characters. The Ultimate Cut, which adds the Tales of the Black Freighter animation, is considered by some fans to be even more complete, though the Director's Cut is the most widely preferred version. |\n| Superman II (1980/2006) | The Richard Donner Cut | Released 26 years after the theatrical version, Donner's cut restores his original vision before he was replaced by Richard Lester. It removes the slapstick comedy, restores Marlon Brando's scenes as Jor-El, and features a different ending that ties better to the first film. The more serious tone and stronger character development make it the preferred version for most fans. |\n\nHowever, while special editions often expand and enrich films, theatrical versions have their own merits that many cinephiles and critics prefer. Theatrical cuts typically offer tighter pacing, maintain the mystery of intentional ambiguity, and preserve the historical significance of films as they were originally experienced by audiences. 
Here's why some prefer theatrical versions:\n\n| Movie | Preferred Version | Key Reasons |\n| --------------------------------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Terminator 2: Judgment Day (1991) | Theatrical | The theatrical cut is nearly perfect in pacing and storytelling. The extended cut's additional scenes (like T-1000 glitching after freezing, John reprogramming the T-800) are interesting but unnecessary. The theatrical version maintains better tension and momentum. Most notably, the \"happy ending\" playground scene in the theatrical cut is preferred to the extended cut's darker alternate ending. |\n| Alien (1979) | Theatrical | The theatrical version is considered a masterpiece of pacing. The Director's Cut adds scenes that, while interesting (like Ripley finding Dallas in the cocoon), actually harm the rapid-fire tension of the final act. Scott himself has stated he prefers the theatrical cut. |\n| Star Wars (1977) | Theatrical | The original theatrical cut is considered more pure and less cluttered than later \"Special Editions\". Fans particularly dislike added CGI elements and the infamous \"Han shot first\" change. The pacing of the theatrical cut is also tighter. |\n| The Empire Strikes Back (1980) | Theatrical | Like A New Hope, fans strongly prefer the unaltered theatrical version. The Special Edition's added CGI and altered effects (like the Emperor hologram replacement, added windows in Cloud City) are considered unnecessary changes to a perfect film. The original practical effects and cinematography are considered superior. |\n| Return of the Jedi (1983) | Theatrical | The theatrical version is preferred over the Special Edition's controversial additions, particularly the changed ending music and added CGI celebration scenes. The \"Jedi Rocks\" musical number in Jabba's Palace is one of the most criticized Special Edition changes. The original Ewok celebration song \"Yub Nub\" is often preferred to the new ending. |\n| Apocalypse Now (1979) | Theatrical | While Redux (2001) and the Final Cut add interesting material, many feel the additions (especially the French plantation sequence) harm the pacing and dilute the core narrative. The theatrical cut maintains better tension and forward momentum. |\n| The Exorcist (1973) | Theatrical | \"The Version You've Never Seen\" adds the famous \"spider walk\" scene and several other moments, but the theatrical cut's pacing is superior. The original version better maintains its sense of building dread. |\n| Donnie Darko (2001) | Theatrical | The Director's Cut over-explains the film's mythology through added scenes and graphics, removing much of the mystery that made the original so compelling. The theatrical cut's ambiguity encourages viewer interpretation. |\n| Amadeus (1984) | Theatrical | The theatrical cut maintains better pacing and tighter focus on the central Salieri-Mozart conflict. Director's Cut adds 20 minutes of historical context and servant relationships that, while interesting, don't enhance the core psychological drama. 
The theatrical version better preserves the opera-like structure of the narrative. |\n| Payback (1999) | Theatrical | The theatrical version's blue-tinted color scheme better fits the neo-noir tone. The original ending with Kris Kristofferson provides a more satisfying conclusion than the Director's Cut (the \"Straight Up\" version). Mel Gibson's voice-over is more engaging, and the slightly lighter tone makes Porter more sympathetic while maintaining the film's edge. Despite extensive studio interference, the theatrical cut became more commercially and critically successful. |\n| Almost Famous (2000) | Theatrical | While the \"Untitled: The Bootleg Cut\" adds interesting character moments and music scenes, the theatrical cut's tighter 122-minute runtime provides better pacing and more focused storytelling. Cameron Crowe's theatrical version better captures the whirlwind feeling of being on tour, while the 40 extra minutes in the extended cut, though enjoyable for fans, can make the journey feel too leisurely. |\n\nA [Custom Format: Special Edition (Unwanted)](<https://dictionarry.dev/formats/special-edition-(unwanted)>) has been created to negate special editions for these specific movies, but does not yet work due to Radarr/Sonarr's parsing of release titles. The parsed 'Title' is removed from the release title, so you can't actually identify movies from custom formats (yet). Once this becomes possible, a single profile will be able to selectively prefer theatrical releases over special ones.\n\nTo mimic this behaviour in the current system, you have to copy the profile you want to use and set its `Special Edition` score to the negative of whatever it was. Then apply the profile to whatever movie you want in its theatrical version.",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "Edition Philosophy",
"slug": "edtion-philosophy",
"author": "santiagosayshey",
"created": "2025-02-26",
"tags": [
"wiki",
"edition",
"extras"
],
"blurb": "A comparison of theatrical vs. special edition cuts and which movies benefit from each format."
},
{
"_id": "home",
"content": "# \ud83d\udc4b Hey!\n\nWelcome to Dictionarry! This project aims to wiki-fy and **simplify media automation** in Radarr / Sonarr through extensive, data driven documentation, custom formats and quality profiles.\n\n## \ud83d\udca1 Motivation\n\nNavigating the world of media automation and coming across quality terms like \"Remux\", or \"HEVC\" or \"Dolby Vision\" can be quite daunting when all you want to do is setup a media server to watch some content. It often **feels like you need a masters in audio / video just to grab the latest blockbuster.** Dictionarry aims not to explain these concepts in detail, but **abstract them into more approachable ideas** that don't require extensive knowledge or experience.\n\nDictionarry leverages two key features of Radarr and Sonarr to simplify media automation:\n\n1. Custom Formats - Think of these as smart filters that scan release titles for specific patterns. They help **identify important characteristics** of your media, such as:\n\n - Video quality (4K, HDR, Dolby Vision)\n - Audio formats (Atmos, DTS, TrueHD)\n - Source types (Remux, Web-DL, Blu-ray)\n - Potential issues (upscaled content, poor encodes)\n\n2. Quality Profiles - These act like a scoring system that **ranks releases** based on their Custom Format matches. You can:\n - Prioritize what matters most to you\n - Automatically upgrade to better versions\n - Avoid problematic releases\n\nThink of Dictionarry as your personal car-buying expert: Instead of researching every technical specification and test-driving dozens of vehicles, you get access to a curated showroom of pre-vetted options that match what you're looking for. Whether you want:\n\n- 2160p Remux - **Maximum Quality** 4K HDR remuxes with lossless audio and Dolby Vision\n- 2160p Quality - **Transparent 4K** HDR encodes selected using the Encode Efficiency Index\n- 1080p Quality - **Transparent 1080p** encodes optimized using the Golden Popcorn Performance Index\n- 1080p Efficient - **Efficient x265 1080p** Encodes optimized to save space using the Encode Efficiency Index\n\n\n\nDictionarry's database of tested profiles and formats handles the technical decisions for you.\n\n## \u2699\ufe0f Profilarr\n\nThe database by itself does nothing. Custom Formats and Quality Profiles **need to be imported** and configured in your individual arr installations. Rather than leaving you to manually create everything yourself based on our guides, we've created **Profilarr** to automate this process.\n\nProfilarr is a **configuration management tool** for Radarr and Sonarr that can interface with **ANY remote configuration database** (not just Dictionarry's!). It automatically:\n\n- **Pulls** new updates from your chosen database\n- **Compiles** the database format into specific arr formats\n- **Imports** them to your arr installations\n- Manages version control of your configurations\n\nBuilt on top of git, Profilarr treats your configurations like code, allowing you to:\n\n- Track changes over time\n- Maintain your own customizations while still receiving database updates\n- Resolve conflicts between local / remote changes when they arise\n\nThe architecture was specifically built like this to **put user choice first**. 
We believe that:\n\n- **Your media setup should reflect your needs, not our opinions**\n- Updates should enhance your configuration, not override it\n- Different users have different requirements (storage constraints, hardware capabilities, quality preferences)\n- The ability to customize should never be sacrificed for convenience\n\nProfilarr empowers you to use Dictionarry's database (or anyone else's!) as a foundation while maintaining the freedom to adapt it to your specific needs.\n\n## \ud83d\udd28 Development Notice\n\nProfilarr 1.0.0 is out now in open beta! https://dictionarry.dev/wiki/profilarr-setup",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "home",
"slug": "home",
"author": "santiagosayshey",
"created": "2025-01-21",
"tags": [
"home",
"wiki"
]
},
{
"_id": "profilarr-casaos",
"content": "This guide will walk you through the process of installing Profilarr as a custom app in Casa OS.\n\n## Prerequisites\n\n- A working Casa OS installation (this guide uses v0.4.15).\n- Basic knowledge of using the Casa OS interface.\n- Access to [https://github.com/Dictionarry-Hub/Profilarr](https://github.com/Dictionarry-Hub/Profilarr) for install file.\n\n## Step-by-Step Installation\n\n1. **Add a Custom App to Casa OS:**\n - Open your web browser and navigate to your Casa OS dashboard.\n - Find and click on the \"+\" icon in the top right corner of the App section.\n - Select \u201cInstall a customized app\u201d\n - Select \u201cImport\u201d in the top right corner of the Settings page\n2. **Import Docker Compose File:**\n - Navigate to [https://github.com/Dictionarry-Hub/Profilarr](https://github.com/Dictionarry-Hub/Profilarr)\n - Scroll down to the \u201cInstallation\u201d section\n - You will see a **Docker Compose (recommended) **code block\n - Copy the Docker Compose file code\n - Navigate back to Casa OS to the Import Docker Compose page and paste the code into the empty text box\n - Note: if you are not contributing to a database, delete the following section or Casa OS will throw an error that the file is missing:\n - `env_file:`\n - `- .env # Optional: Only needed if contributing to a database`\n - Click on \u201cSubmit\u201d and click \u201cOK\u201d to the warning\n3. **Profilarr App Details:**\n - You can leave most settings as default unless you have a specific reason to change them, like customizing to your network/system (Network, Port, Volumes, etc..) otherwise just change your Time Zone in Environmental Variables\n - **Name:** \u201cProfilarr\u201d - but you can change it if you want\n - **Icon:** (Optional) You can upload an icon for the app.\n - **Web UI:** Should be your host device IP address\n - **Network:** Should be bridge\n - **Port:** Should be 6868 TCP\n - **Volumes:** Leave this as default unless you want to change the host path to a specific location\n - **Environment Variables:** (Only TZ is required, the others are optional)\n - TZ = Your Timezone (e.g., America/New_York)\n - GIT_USER_NAME = GitHub username for contributing\n - GIT_USER_EMAIL = GitHub email for contributing\n - PROFILARR_PAT = GitHub Personal Access Token for contributing\n4. **Install the App:**\n - Once you've filled in all the necessary details, click on the \"Install\" button.\n5. **Wait for Installation:**\n - Casa OS will now download and install the app. This might take a few minutes.\n6. **Access Profilarr:**\n - After installation is complete, you should be able to find Profilarr on your Casa OS dashboard. Click on it to launch the app.",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "Casa OS - Profilarr Installation Guide",
"slug": "profilarr-casaos",
"author": "lawgics",
"created": "2025-02-26",
"tags": [
"wiki",
"casaos",
"installation",
"profilarr",
"docker",
"containers"
],
"blurb": "A simple guide to install Profilarr in Casa OS as a custom app."
},
{
"_id": "profilarr-setup",
"content": "Profilarr is a **custom format / quality profile management tool** that acts as a middleman between a configuration database and your radarr/sonarr installations. It automatically:\n\n- **Pulls** new updates from your chosen database\n- **Compiles** the database format into specific arr formats\n- **Imports** them to your arr installations\n- Manages **version control** of your configurations\n\n## Installation\n\nProfilarr follows the GitFlow workflow for development:\n\n- New features are first merged into the `develop` branch for testing\n- Once stable, these features move to the `main` branch\n- For early access to new features, use `santiagosayshey/profilarr:beta`\n- For stable use, use `santiagosayshey/profilarr:latest`\n\nOnce installed, you can visit the web UI at `http://[address]:6868` and begin the setup process.\n\n### Docker\n\n#### Docker Compose (recommended)\n\n```yaml\nservices:\n profilarr:\n image: santiagosayshey/profilarr:latest # or :beta\n container_name: profilarr\n ports:\n - 6868:6868\n volumes:\n - /path/to/your/data:/config\n environment:\n - TZ=UTC # Set your timezone\n env_file:\n - .env # Optional: Only needed if contributing to a database\n restart: unless-stopped\n```\n\n#### Docker CLI\n\n```bash\ndocker run -d \\\n --name=profilarr \\\n -p 6868:6868 \\\n -v /path/to/your/data:/config \\\n -e TZ=UTC \\\n --env-file .env \\ # Optional: Only needed if contributing to a database\n --restart unless-stopped \\\n santiagosayshey/profilarr:latest # or :beta\n```\n\n#### Volumes\n\nWhen configuring the volume mount (`/path/to/your/data:/config`):\n\n- Replace `/path/to/your/data` with the actual path on your host system\n- **Windows users:** The database is case-sensitive. Use a docker volume or the WSL file system directly to avoid issues\n - Docker volume example: `profilarr_data:/config`\n - WSL filesystem example: `/home/username/docker/profilarr:/config`\n\n### CasaOS\n\nView lawgics' CasaOS setup guide [here:](https://dictionarry.dev/wiki/profilarr-casaos)\n\n### Development\n\nIn addition to being a 'sync' tool for end users, Profilarr also acts as a development platform for people to work on, and contribute to, a remote database. Read [here](https://dictionarry.dev/wiki/development) to learn more on how to setup Profilarr for development.\n\n## Usage\n\n### Credentials Setup\n\nThe first time you visit the web UI at `http://[address]:6868`, you'll be prompted to setup login credentials.\n\n- Make sure you keep note of these credentials, as you won't be able to reset the password if you forget it later on (unless you have access to the filesystem and can interact with the docker container.)\n\n\n\n### Configuration Workflows\n\nOnce you've setup your user credentials you can start working on your media configurations. You have the choice to either:\n\n1. Connect to an external database, make changes, receive updates and handle change conflicts.\n - This is what most people will be using if they don't want to build configurations from scratch.\n2. Use Profilarr completely locally, without a database.\n - This option is left for people who want the advantages of Profilarr's compilation system (single definition profiles, tweaks, better management, etc), but don't want to be tied to any one database. Skip ahead to [Making Changes](#making-changes)\n\n#### Connecting to a Database\n\nProfilarr leverages Git to create an open-source configuration sharing system. 
To get started, navigate to `Settings -> Database`, and link a repository.\n\n\n\n| # | Feature | Description |\n| --- | -------------------- | ----------- |\n| 1 | Database information | Contains basic information about the database - Name, Owner, Stars/Issues/PRs |\n| 2 | Status Container | - View outgoing changes (any local changes you've made to the database)<br>- View incoming changes (any changes pushed to a remote database that haven't been applied to your local one)<br>- View merge conflicts (when you've made changes to a file that also has incoming changes) |\n| 3 | Commit / Change Log | - View logs of all prior changes applied to your database<br>- If your HEAD is out of date with the remote, it will only show commits after the point of divergence |\n| 4 | Unlink Repo | - Remove the currently linked repo<br>- Choose to either keep the current files and stop receiving updates<br>- Or remove all files and sync to a completely different database instead |\n| 5 | Current Branch | - Databases may choose to maintain stable / beta versions of their configurations via branches<br>- You would choose your preferred configuration path here (most will just use stable) |\n| 6 | Auto Sync | - Option to let Profilarr automatically pull in new updates without consulting you first<br>- Useful if you want to connect to a database, receive updates and forget about it afterwards<br>- If a pull causes a merge conflict, Profilarr will pause mid-merge and let you resolve the conflicts manually before continuing |\n\n**NOTE**: The database must adhere to the Profilarr standard format to work correctly with Profilarr (i.e. configurations must be made / edited inside Profilarr, not externally).\n\n- Profilarr does not ensure that every public database adheres to this format or works properly with it (only our own - the Dictionarry database).\n\nThe following sections will use the [Dictionarry Database](https://github.com/Dictionarry-Hub/database) for demonstration purposes.\n\n#### Getting Updates\n\nDatabases are likely to change over time; they might receive new features such as edition formats, or new quality profiles targeting anime releases. They might fix bugs with regex patterns, or improve descriptions and tags. Since Profilarr connects to a Git repository, it can take advantage of Git's version control capabilities to show when your local database is out of sync with the remote database.\n\nWhen updates are available, Profilarr will display them in the Status Container section of the Database page (provided you don't have Auto Sync enabled):\n\n\n\n1. **Incoming Changes**: Shows all changes that have been pushed to the remote database but haven't yet been applied to your local installation\n   - Each change corresponds to a single file\n   - Changes will usually be marked as tweaks, additions, removals, renames, etc.\n   - You can click the 'View Changes' button, which will open a modal that shows the associated commit + message, and the exact fields that have changed\n\n\n\n
2. **Update Process**:\n\n   - Click the \"Pull Changes\" button to apply all incoming changes to your local database\n   - Profilarr will automatically merge these changes with your local setup\n   - If you've enabled Auto Sync in settings, these updates will be applied automatically\n   - Once pulled, your database will go back to being in sync\n   - It is not yet possible to pick and choose individual updates, but this feature will be looked at in the future\n\n3. **Update History**:\n   - All successfully applied updates are logged in the Commit/Change Log section\n   - This provides a complete history of changes applied to your database\n   - You can use this log to track when specific features were added or modified\n   - While technically feasible, Profilarr does NOT allow you to roll back to a specific commit, for interoperability reasons\n\n#### Making Changes\n\nDatabases are meant to act as 'starting points' for your setup:\n\n- Some may be broad and have a variety of profiles to use\n- Others might be incredibly niche and focus on small but important philosophies\n- Even Dictionarry's database, which aims to be both broad and niche at the same time, is just a starting point\n\nYou have the power to make changes to _whatever_ you want, and still receive updates from a database. To make changes, you simply interact with the configs you want to change and save them - just as you would in Radarr / Sonarr.\n\n- You can change file names, regex patterns, descriptions, format scores, quality groups - whatever you want.\n- You can view these changes in the database tab just as you would see incoming changes.\n\n\n\nFrom this point, you have a few choices. You can either:\n\n- **Revert changes.** Have you ever made changes to your quality profiles and wanted to change them back, but couldn't because you couldn't remember what they used to be? Well, since we operate within Git, you can revert a file back to its previous 'stable' state using `git revert`. It's as simple as pressing a button now.\n- **Commit Changes.** When you're satisfied with your modifications and want to preserve them, you need to stage and commit them to your local Git repository. This creates a permanent record of your customizations that Profilarr can reference when pulling updates from the remote database.
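\n\nFor the curious, this is plain Git under the hood. The Stage and Commit buttons map roughly onto commands like the following (illustrative only - the file path is made up, and you never need to touch the command line yourself):\n\n```bash\n# Stage a modified file so it is included in the next commit (path is hypothetical)\ngit add \"custom_formats/AV1.yml\"\n\n# Commit the staged changes with a descriptive message\ngit commit -m \"Adjusted AV1 score to prioritize quality over filesize\"\n```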
\n\n\n\n| # | Action | Description |\n| --- | ------ | ----------- |\n| 1 | Stage | - Marks modified files to be included in your next commit<br>- This is the preparation step before saving changes permanently<br>- You can select which specific files to stage, allowing you to group related changes together<br>- Staged files appear in a separate section in the interface<br>- Files must be staged before they can be committed (Git's two-phase commit process ensures you review changes before finalizing them) |\n| 2 | Unstage | - Removes files from the staging area that you previously staged<br>- Useful when you accidentally stage files or decide not to include certain changes in your commit<br>- The file remains modified in your working directory, but won't be included in the next commit<br>- You can only select and unstage files that are currently in the staging area |\n| 3 | Commit | - Permanently saves all staged changes to your local Git repository<br>- Requires a commit message that describes what changes were made and why<br>- Creates a checkpoint you can revert to later if needed<br>- **Important**: All staged files will be committed, not just selected ones<br>- After committing, these changes become part of your local configuration history<br>- This is the crucial step that allows Profilarr to track your customizations separately from the original database |\n| 4 | Revert | - Returns a file to its previous state before your modifications<br>- Especially useful when you've made changes you no longer want to keep<br>- You can only revert uncommitted changes<br>- This preserves the history of changes while effectively canceling out unwanted modifications |\n| 5 | Push | - Sends your local commits to the remote database<br>- **Only relevant for database contributors and developers**<br>- Requires appropriate permissions to the remote repository<br>- Regular users don't need to worry about this action |\n\n##### Why Commits?\n\nYou might wonder: \"Why do I need to manually stage and commit changes? Why doesn't Profilarr just save them automatically?\" The answer lies in Profilarr's core philosophy of balancing customization with ongoing updates:\n\n**Breaking the \"All or Nothing\" Model**: Traditional tools force you to choose - either use their configurations exactly as provided, or be cut off from future updates once you make changes. When you commit in Profilarr, you're creating clear markers that tell the system \"these parts are my customizations.\" This allows Profilarr to know exactly which parts to preserve when new updates arrive and which parts can be safely updated.\n\nTechnically, Git is creating snapshots of your configurations at specific points in time. When you commit changes, Git records the exact differences between the original file and your modified version. Later, when pulling updates, Git analyzes these differences alongside the incoming changes and intelligently determines how to combine both sets of modifications without losing either. Without these explicit commit markers, there would be no reliable way to perform this merge operation.
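\n\nIn Git terms, pulling updates boils down to fetching the remote database and merging it into your local branch - roughly the following, which Profilarr runs for you behind the scenes (the branch name is illustrative and depends on the database you linked):\n\n```bash\n# Rough equivalent of the \"Pull Changes\" button (illustrative only)\ngit fetch origin        # download the remote database's new commits\ngit merge origin/main   # three-way merge of remote changes with your local commits\n```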
\n\nWhile Profilarr could theoretically automate the staging and committing process, we've deliberately kept it manual. This is because Profilarr also serves as a development platform, and developers need precise control over when and how their changes are saved. Automatic commits would be frustrating for database contributors who are testing various configurations and don't want every experimental change permanently recorded. This manual approach gives both end users and developers the flexibility they need without compromising functionality.\n\nWhile the extra step might seem clunky at first, it's the mechanism that enables Profilarr's unique ability to let you personalize configurations while still receiving ongoing improvements. The alternative would be returning to the \"use our configs exactly as provided or you're on your own\" approach of other tools.\n\n#### Handling Merge Conflicts\n\nEven with Git's intelligent merging, sometimes you'll encounter situations where both you and the remote database have modified the same parts of the same files. When this happens, Profilarr needs your help to determine which changes to keep.\n\n##### When Conflicts Occur\n\nMerge conflicts might arise in a scenario like this:\n\n- You've customized a quality profile to allow AV1 encodes\n- Meanwhile, the remote database has updated the same profile to allow AV1 encodes, but at a reduced score that is offset by other formats\n- Both changes affect the same file\n\nWhen incoming changes affect files you've modified, Profilarr will mark them with a \"Potential Conflict\" label in the Status Container's incoming changes.\n\n\n\nWhen you attempt to pull these changes, the database will enter a \"Merge Conflict\" state.\n\n- At any point, you can choose to abort the merge and go back to your previous database state.\n- You will not, however, be able to pull in any new updates until the merge conflict has been resolved.\n\n\n\n##### Resolving Conflicts\n\nIn the Merge Conflict state:\n\n1. Profilarr prevents you from making changes to other files until all conflicts are resolved\n2. The interface displays each conflicting field side-by-side, showing \"Yours\" (your version) and \"Theirs\" (remote version)\n3. You must resolve conflicts field-by-field, file-by-file\n4. For each field, you choose whether to keep your version or adopt the remote changes\n5. After resolving a conflict (but before completing the merge), you can edit your choices in case you change your mind\n\n\n\nHere, the user has chosen to:\n\n- Accept the incoming changes for two custom formats (360p and 2160p Quality Tier 5)\n- Keep their local score change for AV1\n\n##### After Resolution\n\nOnce you've resolved all conflicts for all files, you can commit the merge changes:\n\n\n\n1. Non-conflicting files that were part of the pull are automatically merged\n2. Your resolved files maintain the exact choices you made during conflict resolution\n3. Your local database returns to an \"in sync\" state with the remote\n4. Normal operations can resume until the next update or change\n\nThis process ensures you get the best of both worlds - keeping your important customizations while still benefiting from improvements in the remote database. While it may seem complex at first, this approach gives you complete control over how updates are integrated with your personalized setup.
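\n\nIf you ever peek at the underlying files during a conflict, what you'll see is standard Git behaviour - both versions written into the file between conflict markers. A hypothetical example (the field and values are made up; Profilarr parses these markers into the \"Yours\" / \"Theirs\" choices shown above):\n\n```bash\n# During a conflicted merge, a file might contain markers like:\n#\n#   <<<<<<< HEAD\n#   score: 60        # \"Yours\" - your local change\n#   =======\n#   score: 25        # \"Theirs\" - the remote database's change\n#   >>>>>>> origin/main\n#\n# On the command line, files still containing markers can be listed with:\ngit diff --name-only --diff-filter=U\n```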
\n\n#### Profilarr Quirks\n\nProfilarr has made some changes to the way custom formats and quality profiles are built. Here's a basic overview of the biggest differences compared to standard Radarr/Sonarr configurations:\n\n| Feature | Description |\n| ------- | ----------- |\n| Reusable Regex Patterns | - Regex patterns are now separate from custom formats and referenced by name<br>- This allows reusing the same pattern in multiple places<br>- Changes to a pattern automatically apply everywhere it's used<br>- At compile time, pattern names are resolved to their actual regex expressions for the \\*arr apps |\n| Conditional Format Import | - Custom formats with a score of 0 are not included in profiles (unless specifically added in selective mode)<br>- This helps keep your profiles cleaner by excluding unused formats |\n| Enhanced Sorting | - Additional methods for sorting, scoring, and searching files |\n| Language Handling | - Complete overhaul of language management<br>- All profiles set language to \"Any\" and use language custom formats based on preferences<br>- Options include:<br> \u2022 \"Any\" - No language filtering<br> \u2022 \"Must Include\" - Ensures releases contain at least your preferred language<br> \u2022 \"Must Only Be\" - Ensures releases contain ONLY your preferred language |\n| Documentation-Focused | - Tags and descriptions are stored in Profilarr but removed during compilation<br>- These elements are purely for documentation and organization |\n| Integrated Testing | - Regex patterns and custom formats include testing functionality<br>- Used in continuous integration to ensure changes don't break existing functionality<br>- Helps maintain compatibility as configurations evolve |\n| Single Definition | - Profiles and custom formats are defined once in Profilarr<br>- Automatically converted to appropriate Radarr/Sonarr syntax at compile time<br>- Eliminates the need to maintain separate definitions unless different logic is required |\n\n#### Git Gud\n\nProfilarr attempts to make Git accessible to all users. However, there are some aspects of it that can't be completely simplified or safeguarded against. Understanding these key concepts will help you avoid common pitfalls and get the most out of the system, even if you've never used Git before.
\n\n| Topic | Guidance |\n| ----- | -------- |\n| Commit Messages | - Write clear, descriptive commit messages that explain what you changed and why<br>- Good messages help you track your history and understand changes months later<br>- Examples: \"Adjusted AV1 score to prioritize quality over filesize\", \"Added support for anime dual-audio formats\" |\n| Avoiding File Deletion | - Deleting files should be a last resort, not a go-to solution<br>- When you delete a file that exists in the remote database, it will cause merge conflicts when that file receives updates<br>- Instead of deleting, consider:<br> \u2022 Disabling formats you don't want to import<br> \u2022 Renaming files to indicate they're not in use<br> \u2022 Using comments to note why you're not using certain configurations |\n| Commit Size | - Smaller commits that focus on specific changes are easier to manage<br>- They make conflict resolution simpler when conflicts occur<br>- Example: Commit changes to anime profiles separately from changes to movie profiles |\n| Reviewing Changes | - Always review what you're about to stage using the \"View Changes\" feature<br>- Make sure each change is intentional and correct<br>- This helps prevent accidental modifications from being committed |\n| Backups | - Before making significant changes, consider exporting your configurations<br>- This provides a fallback if something goes wrong<br>- Most issues can be resolved, but having a backup gives peace of mind |\n| Abandoned Changes | - If you have unstaged changes you no longer want, use the \"Revert\" option<br>- Don't leave unwanted changes hanging around - they'll complicate future operations |\n\n### Importing\n\nOnce you've set up your media configuration workflow, you can set up the external apps that Profilarr will sync with.
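\n\nBefore adding an app, it can be worth confirming that the address and API key you're about to enter actually work. A quick, purely optional sanity check from any machine that can reach your arr instance (URL, port and key below are placeholders):\n\n```bash\n# Optional: verify the arr API is reachable and the key is valid (values are placeholders)\ncurl -s -H \"X-Api-Key: YOUR_API_KEY\" http://192.168.1.10:7878/api/v3/system/status\n# A JSON response with the app name and version means you're good to go\n```\n\n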
Within Profilarr, you need to set up:\n\n\n\n#### Type / Server\n\nAPI changes can sometimes break Profilarr's import functionality, so version limits are enforced on the apps it can import to - such breakages are rare and are usually fixed quickly.\n\n#### Sync Settings\n\n| Sync Method | Description |\n| ----------- | ----------- |\n| Manual | - Go to the format/profile page and enter select mode (button in the top right toolbar or Ctrl+A)<br>- Select specific files you want to import and where you want to import them<br>- Gives you full control over what configurations are synced to which applications<br>- Best for users who want to carefully manage what gets imported |\n| On Pull | - Automatically syncs selected files whenever the database receives an update<br>- When combined with Auto Sync, allows Profilarr to work completely autonomously |\n| On Schedule | - Similar to On Pull, but runs on a schedule of your choosing<br>- Set specific times/intervals for Profilarr to check for changes and import them<br>- Useful for controlling when system resources are used for synchronization<br>- Good compromise between automation and control<br>- Creates a scheduled task that you can also trigger manually anytime you want |\n| Import as Unique | - Works with any of the sync choices above<br>- Appends a unique identifier to imported files<br>- Allows you to use your Profilarr database alongside different tools/configs<br>- Example: Run TRaSH guides + Notifiarr configurations simultaneously with your Profilarr configs<br>- Prevents name conflicts when using multiple configuration sources |\n\n#### External App Setup\n\nIn future updates (hopefully soon), Profilarr will handle a quick setup sync (changing media management, quality slider settings, etc.), but for now you need to change these things manually.\n\n| Setting | Recommendation | Explanation |\n| ------- | -------------- | ----------- |\n| Propers and Repacks | Set to \"Do Not Prefer\" | Other options will override custom formats and make Radarr/Sonarr grab things we don't want |\n| Quality Sliders | Set min/max for everything | Custom formats will do 99% of the ranking, and other settings usually just get in the way |\n",
"last_modified": "2025-03-18T21:23:24.504089+00:00",
"title": "Profilarr Setup",
"slug": "profilarr-setup",
"author": "santiagosayshey",
"created": "2025-03-01",
"tags": [
"home",
"wiki",
"setup",
"install"
],
"blurb": "Comprehensive setup and usage guide for Profilarr."
}
]